Unsupervised pretraining transfers well across languages

Abstract

Cross-lingual and multi-lingual training of Automatic Speech Recognition (ASR) has been extensively investigated in the supervised setting. This assumes the existence of a parallel corpus of speech and orthographic transcriptions. Recently, contrastive predictive coding (CPC) algorithms have been proposed to pretrain ASR systems with unlabelled data. In this work, we investigate whether unsupervised pretraining transfers well across languages. We show that a slight modification of the CPC pretraining extracts features that transfer well to other languages, being on par or even outperforming supervised pretraining. This shows the potential of unsupervised methods for languages with few linguistic resources.

Morgane Rivière, Armand Joulin, Pierre-Emmanuel Mazaré, Emmanuel Dupoux
Facebook AI Research; Ecole des Hautes Etudes en Sciences Sociales1

Keywords: Unsupervised pretraining, low resources, cross-lingual

1 Introduction

Learning phoneme representations remains a challenge for a large number of languages with limited supervised resources. A common approach is to pre-train these representations on a large supervised corpus in other languages and transfer them to the low resource languages [12, 14]. For example, Vesely et al. [29] learn a shared representation on a supervised multilingual dataset and finetune it on the target language. This pre-training works even between distant languages, but requires massive supervised corpora in the same domain.

Recently, several works [27, 15] have proposed promising methods to train monolingual audio representations without supervision. In particular, Schneider et al. [22] show that the unsupervised pre-training method of van den Oord et al. [27] improves the quality of automatic speech recognition (ASR) on several competitive benchmarks. In this paper, we investigate whether similar unsupervised pre-training methods can be leveraged in a cross-lingual setting to improve the quality of phoneme representations for low-resource languages.

We focus on the contrastive predictive coding (CPC) method of van den Oord et al. [27], since Schneider et al. [22] have shown its benefit for pre-training features for ASR. CPC is a form of forward modeling in the feature space [4]: it predicts the near-future windows in an audio sequence while contrasting them with windows from other sequences or windows more distant in time. We introduce several modifications to the original approach that stabilize the training and lead to better phoneme representations. We use our modified CPC model to pre-train phoneme representations in English, namely on Librispeech, and transfer them to several low-resource languages from the Common Voice database.

In this paper, we obtain several results related to transferring features pre-trained without supervision across languages. First, pre-trained phoneme representations outperform representations trained from scratch in the target language, even though we do not use any supervision for the pre-training. Surprisingly, we also observe that the gap between unsupervised and supervised pre-training is relatively small when the same pre-training corpora are used. Finally, scaling unsupervised pre-training to larger unlabelled datasets further reduces the gap with the supervised pre-trained features, and even surpasses them in some low-resource languages.

2 Related work

2.1 Multilingual pre-training for speech recognition

A common way to improve speech recognition in low-resource languages is to train multilingual speech recognition systems with shared components [24, 25, 14]. For example, Stolcke et al. [25] train features for phoneme classification in a different language. Burget et al. [3] share the parameters of a Gaussian Mixture Model. Closer to our work, several studies have shared the parameters of a neural network encoder, using feedforward networks [29, 12, 14] or LSTMs [19]. The model is then finetuned on the target low-resource language to fit its specificities [6]. The sampling of languages during pre-training can focus on languages related to the target language [19]. Another approach is to encourage a language-independent encoder with an adversarial loss [1]. As opposed to our work, this line of research focuses on supervised pre-training, which restricts its impact to domains or languages with large supervised resources.

2.2 Unsupervised learning of features

Many unsupervised learning approaches have been proposed for speech and we focus on those based on contrastive learning [4, 30, 23]. In particular, Time Contrastive Learning [15] learns audio features by discriminating between time windows. Our work closely follows van den Oord et al. [27], where a contrastive loss is used to predict forward representations in an audio sequence. Their Contrastive Predictive Coding (CPC) objective function is similar to the objective of word2vec [20], applied to sequences instead of words. Contrastive approaches are also related to exemplar self-supervision [7, 2, 13]. However, CPC has the advantage of making no assumption about the nature or number of the training data samples. Recently, variants of CPC have been applied to monolingual ASR [22] and images [26].

3 Approach

In this section, we briefly introduce the approach of van den Oord et al. [27] and refer the reader to the original paper for details. We also present several modifications that improve the resulting representations and stabilize the training. We make our code as well as our experiments publicly available2.

3.1 Contrastive Predictive Coding

Unsupervised training of neural networks relies on building a pretext task that requires discriminative features to be solved. The pretext task used in Contrastive Predictive Coding (CPC) [27] is forward modeling, i.e., predicting the future states of a sequence from its past. The particularity of CPC is to frame forward modeling as a reconstruction of future representations, not future inputs. Past and future representations are built from the same model, and a contrastive loss ensures that temporally nearby representations are pushed closer than temporally distant ones.

More precisely, given an audio sequence split into discrete time steps, or windows, we embed the input signal $x_t$ at each time step with an encoder. Then, we form the current phoneme representation by applying a sequence model to the resulting sequence of embeddings, i.e.,

$$c_t = g_s(z_1, \dots, z_t), \qquad z_t = g_e(x_t),$$

where $g_e$ is the encoder and $g_s$ is the sequence model, parametrized by $\theta_e$ and $\theta_s$ respectively. In CPC, the encoder is a 5-layer convolutional network (kernel sizes: 10, 8, 4, 4, 4; stride sizes: 5, 4, 2, 2, 2) and the sequence model is a 1-layer Gated Recurrent Unit (GRU). The encoder has a downsampling factor of 160, meaning that for a 16 kHz input, each feature encodes 10 ms of audio.
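As a sanity check, the downsampling factor follows directly from the stride sizes listed above: the product of the strides gives the number of input samples covered by one output feature.

```python
# Downsampling factor of the CPC encoder: the product of its stride sizes.
strides = [5, 4, 2, 2, 2]
factor = 1
for s in strides:
    factor *= s

# At a 16 kHz sampling rate, the duration covered by one feature (in ms):
window_ms = factor / 16000 * 1000

print(factor, window_ms)  # 160 10.0
```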

Given this phoneme embedding $c_t$, the pretext task in CPC is to predict the $K$ next future representations, i.e., $z_{t+k}$ for $k \in \{1, \dots, K\}$. CPC also pushes away representations from a random subset $\mathcal{N}_t$ of negative examples, or "distant" windows. Overall, the loss function at a time step $t$ is thus:

$$\mathcal{L}_t = -\frac{1}{K} \sum_{k=1}^{K} \log\left[\frac{\exp\left(z_{t+k}^\top W_k c_t\right)}{\sum_{n \in \mathcal{N}_t \cup \{t+k\}} \exp\left(z_n^\top W_k c_t\right)}\right] \qquad (1)$$

where $W_k$ is a linear classifier. There are many ways to sample the "distant" windows; we follow van den Oord et al. [27] by sampling negatives within the same speaker. The parameters $\theta_e$, $\theta_s$, and $W_k$ are learned with stochastic gradient descent.
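The loss above can be sketched in a few lines. The following is an illustrative NumPy implementation for a single time step, assuming the positive window is included in the contrastive denominator; the function name and tensor shapes are our own choices, not taken from the original code.

```python
import numpy as np

def cpc_step_loss(c_t, z_future, z_negatives, W):
    """Illustrative sketch of the CPC contrastive loss for one time step.

    c_t:         (d,)      context embedding at time t
    z_future:    (K, d)    the K future encoder outputs z_{t+k}
    z_negatives: (N, d)    "distant" windows sampled as negatives
    W:           (K, d, d) one linear classifier per prediction offset k
    """
    K = z_future.shape[0]
    total = 0.0
    for k in range(K):
        pred = W[k] @ c_t                # linear prediction of z_{t+k}
        pos = z_future[k] @ pred         # score of the true future window
        neg = z_negatives @ pred         # scores of the negative windows
        logits = np.concatenate(([pos], neg))
        # softmax cross-entropy with the positive window as the target
        total += -(pos - np.log(np.sum(np.exp(logits))))
    return total / K
```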

3.2 Modifications to Contrastive Predictive Coding

Stabilization of the training

We observe empirically that the training of CPC is unstable and can converge to poor solutions. The reason is the presence of batch normalization [16] between the layers of the encoder. Indeed, batch normalization parameters are learned by computing statistics over the whole batch. Since the encoder is shared across a sequence, these parameters leak information between past and future windows. This makes minimizing eq. (1) trivial when batch normalization is activated, resulting in instability. We fix this issue by replacing batch normalization with a channel-wise normalization that plays a similar role in conditioning internal representations. As opposed to batch normalization, its statistics are not shared across the sequence and do not leak information (see Supplementary Section S1.1 for details).
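One way to implement such a channel-wise normalization (a sketch of the idea, not the exact implementation) is to normalize each time step of each sample across its channels only, so that no statistics are shared across the batch or across time:

```python
import numpy as np

def channel_norm(x, eps=1e-5):
    """Channel-wise normalization sketch; x has shape (batch, channels, time).

    Statistics are computed per sample and per time step, across channels
    only, so nothing is shared across the batch or between time windows,
    unlike batch normalization.
    """
    mean = x.mean(axis=1, keepdims=True)   # shape (batch, 1, time)
    var = x.var(axis=1, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)
```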

Improving the model

The prediction of future representations is made by linear classifiers on top of a phoneme embedding, as shown in eq. (1). The motivation is to encourage the phoneme embeddings to encode linearly separable phonemes. However, the future representations are not phoneme representations themselves; they are embeddings of the time window. Comparing the outputs of a sequence model and an encoder with a linear classifier may not result in linearly separable phoneme representations. Several alternatives are possible, such as adding a sequence model on top of the future representations. In practice, we find that replacing each linear classifier with a 1-layer Transformer network [28] works well (see Supplementary Section S1.2 for details). This layer accesses the entire sequence of phoneme embeddings to predict a particular future representation. We also observe that reducing the dimension of the convolutional layers from 512 to 256 does not impact performance while reducing the memory footprint. Finally, using an LSTM instead of a GRU slightly improves the performance.

3.3 Transferring features to phoneme classification

In this work, we evaluate the quality of phoneme representations trained with no supervision when transferred across languages. Standard cross-lingual approaches finetune their pre-trained network on the target language. While this improves the quality of the resulting representations, it does not assess the quality of the pre-trained representations themselves. Instead, we freeze the model after pre-training and simply learn a linear classifier for the target language. Specifically, we perform linear classification on a concatenation of several consecutive windows, to match the average duration of a phoneme. We then use the CTC loss between our model predictions and the non-aligned phoneme transcriptions [10]. This procedure explicitly measures the linear separability of the phoneme representations once transferred to a target language.
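The concatenation step can be sketched as a simple reshaping of the frozen feature sequence. The group size `k` below is an illustrative parameter: the paper matches the average phoneme duration, given that each CPC feature covers roughly 10 ms of audio.

```python
import numpy as np

def concat_windows(features, k):
    """Group k consecutive CPC features into one vector so that a linear
    classifier sees roughly one phoneme's worth of context.

    features: (T, d) frozen CPC outputs; returns an array of shape
    (T // k, k * d), dropping the trailing incomplete group.
    """
    T, d = features.shape
    T = (T // k) * k
    return features[:T].reshape(T // k, k * d)
```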

4 Experimental setting

Pre-training on the Librispeech dataset

We pre-train models on the English Librispeech dataset (LS). We consider both the 100h and 360h splits of clean data. For the supervised pre-training model, we use the aligned phone labels provided by [27] for Librispeech-100h.

Transferring to the Common Voice database

After the pre-training, we freeze the parameters of our models and transfer the features across languages. We consider the Common Voice database3 as it comes in many languages. We retrieve the non-aligned phoneme transcription of each audio sample by running the open-source tool phonemizer4 on the corresponding text scripts. We split our dataset into train, validation and test sets along speakers, to reduce the influence of speakers on the performance of phoneme prediction. We consider two train sets of either one or five hours. We will open source our train-test splits along with our code.
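A speaker-disjoint split as described above can be sketched as follows; the function name and split ratios are illustrative assumptions, not those of the released splits:

```python
import random

def split_by_speaker(utterances, ratios=(0.8, 0.1, 0.1), seed=0):
    """Assign whole speakers to train/valid/test so no speaker crosses splits.

    utterances: list of (speaker_id, path) pairs.
    Returns a dict mapping split name to its list of utterances.
    """
    speakers = sorted({spk for spk, _ in utterances})
    random.Random(seed).shuffle(speakers)
    n = len(speakers)
    n_train = int(ratios[0] * n)
    n_valid = int(ratios[1] * n)
    groups = {
        "train": set(speakers[:n_train]),
        "valid": set(speakers[n_train:n_train + n_valid]),
        "test": set(speakers[n_train + n_valid:]),
    }
    return {name: [u for u in utterances if u[0] in spks]
            for name, spks in groups.items()}
```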

Measuring phoneme separability on Zerospeech2017

Zerospeech2017 is a dataset designed to measure the phoneme separability of unsupervised models in different languages. We consider the English, Mandarin and French benchmarks and report the ABX score on them [21]. The ABX score measures the discriminability between phonemes by estimating the probability that speech segments are closer to one another if they encode the same phoneme than if they do not (the distance being the DTW-realigned average frame-wise cosine).
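As an illustration, a heavily simplified frame-level version of the ABX computation (omitting the DTW realignment over variable-length segments that the actual benchmark uses) might look like:

```python
import numpy as np

def cos_dist(u, v):
    """Cosine distance between two feature vectors."""
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def abx_error_rate(A, B, X):
    """Simplified ABX sketch: A and X hold representations of the same
    phoneme, B of a different one. We count the fraction of (a, b, x)
    triplets where x lands closer to b than to a; lower is better,
    0.5 is chance level.
    """
    errors, total = 0, 0
    for x in X:
        for a in A:
            for b in B:
                if cos_dist(x, b) < cos_dist(x, a):
                    errors += 1
                total += 1
    return errors / total
```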

5 Results

5.1 Within-language results

In this set of experiments, we compare the original CPC with our modified version on two within-language tasks: phoneme discriminability on the English Zerospeech2017 dataset, and phoneme linear separability on Librispeech [27]. In Table 1, we compare our ABX score with that of the toplines from the Zerospeech leaderboards. It is interesting to note that CPC does not perform well on this metric, but our modified version is on par with the state of the art. Overall, our modified CPC surpasses the original model on phoneme classification and even matches unsupervised approaches dedicated to phoneme separability. In Table 2, we show that our modifications to CPC lead to an improvement of 3.4 points in phoneme classification compared to the original CPC implementation.

Across Within
Trained on ZeroSpeech2017 (45 h)
Supervised topline [8] 6.9 5.3
Heck et al. [11] 8.7 6.2
Chorowski et al. [5] 8.0 5.5
Trained on Librispeech-360
CPC [27] 13.0 9.6
Modified CPC 8.5 6.5
Table 1: Phoneme discriminability within languages. Within- and across-speakers ABX scores for the English Zerospeech2017 test set. We compare CPC and modified CPC trained on Librispeech-360 to the best performing models.
Phone accuracy
Supervised topline 76.3
CPC [27] 65.5
Modified CPC 68.9
Table 2: Phone classification within language. Accuracy on the English LibriSpeech-100h dataset for a linear classifier trained on top of frozen features obtained with the original and our modified CPC model.
Model Pretraining Frozen du es fr it ky ru sv tr tt zh Avg
From scratch - No 84.7 95.9 95.1 95.0 81.5 97.7 86.1 83.1 72.9 84.3 87.6
Bottleneck [9] Babel-1070h Yes 47.9 36.6 48.3 39.0 38.7 45.2 52.6 43.4 42.5 54.3 44.9
Supervised LS-100h Yes 42.4 36.4 47.0 40.5 41.0 43.6 47.0 48.5 41.5 56.8 44.5
CPC [27] LS-100h Yes 51.5 44.2 54.5 47.0 44.8 49.0 54.0 54.7 48.9 60.1 50.9
Modified CPC LS-100h Yes 44.4 38.7 49.3 42.1 40.7 45.2 48.8 49.7 44.0 55.5 45.8
Modified CPC LS-360h Yes 42.5 38.0 47.1 40.5 41.2 43.7 47.5 47.3 42.0 55.0 44.5
Table 3: Transfer of pre-trained phoneme features across languages. We pre-train the features on 100h and 360h of Librispeech, with supervision (“Supervised”) or without (“CPC” and “Modified CPC”). We also include multilingual bottleneck features (“Bottleneck”) pre-trained on 1070h from the Babel dataset. We train a linear classifier on the frozen features using 1h of speech from the Common Voice database in different languages. We also report a supervised model trained entirely from scratch on the 1h of speech. We report Phone Error Rate. The languages are: Dutch (du), Spanish (es), French (fr), Italian (it), Kyrgyz (ky), Russian (ru), Swedish (sv), Turkish (tr), Tatar (tt) and Mandarin (zh).
French Mandarin
A. W. A. W.
Trained within language
Supervised topline 9.1 6.8 5.7 4.2
Heck et al. [11] 11.7 8.7 7.4 7.9
Chorowski et al. [5] 10.8 7.5 11.2 10.7
Trained on English (Librispeech-360)
CPC [27] 18.0 12.3 11.5 10.0
Modified CPC 14.6 10.0 9.5 8.9
Table 4: Phoneme discriminability of unsupervised features across languages. Across- (“A.”) and within-speakers (“W.”) ABX scores on French and Mandarin speech for CPC features pre-trained in English. For comparison: the best systems plus supervised topline of the Zerospeech leaderboard trained within-language.

5.2 Cross-lingual transfer of phoneme features

In a first experiment, we consider the problem of phoneme classification across languages on the Common Voice database. In Table 3, we report the phone error rate (PER) for linear classifiers trained on top of phoneme features pretrained with and without supervision. We also compare with a model trained from scratch on the target dataset. The training set of each target dataset is only one hour long; the model trained from scratch thus performs poorly. On the other hand, pre-trained features significantly improve the performance in all languages, even without any finetuning. First, on 100 hours of Librispeech, our modified CPC outperforms the original CPC by 5.1 points on average. However, supervised pre-training still performs slightly better (1.3 points) than our unsupervised pre-training on the same corpus. An advantage of unsupervised pre-training is that we can apply it to any larger unannotated dataset. We show the benefits of this by pre-training our modified CPC on 360 hours of unlabelled data from Librispeech, matching the performance of the supervised model. This result not only confirms the findings of [22] but also shows that unsupervised pre-training can match supervised pre-training given enough data (see Supplementary Section S2 with the larger Libri-light dataset [17]).

In a second experiment, we compare the quality of our pre-trained features against other unsupervised methods on Zerospeech2017. In Table 4, we compare, on French and Mandarin, the ABX score of our approach trained on English Librispeech with unsupervised methods trained for these languages. Surprisingly, our English features transferred to other languages are competitive with the toplines of the leaderboard. This result further shows that unsupervised pre-trained features generalize well across languages.

Impact of finetuning phoneme features

Model pretraining frozen finetune
From scratch - - 38.3
Supervised LS-100 37.6 29.2
CPC [27] LS-100 43.5 33.3
Mod. CPC LS-100 38.8 31.0
Mod. CPC LS-360 37.2 30.7
Table 5: Comparison between frozen and fine-tuned features. PER averaged over five languages (Spanish, French, Italian, Russian and Tatar). The training set for each language contains 5 hours extracted from the Common Voice database.

We also study the impact of fine-tuning the phoneme features instead of freezing them. We use 5 hours of speech in the target languages for this experiment. In Table 5, we compare the difference between frozen and fine-tuned features. As in the experiments on 1h of speech, our approach is on par with supervised pre-training when the features are frozen. We also observe a boost of around 8 performance points for all the pre-training methods when we fine-tune the features. Our approach is still relatively competitive with supervised pre-training, but slightly worse (1.5 points) on average.

6 Conclusion

Pre-training in a given language, with or without supervision, can produce features usable across other languages and other domains. Moreover, these features can be matched with a set of phonemes even with extremely low-resource datasets and unaligned labels. They are usable with a very simple linear model and can be trained at low cost. Finally, though supervised pre-training tends to be better than unsupervised pre-training, the gap between them is small and can be greatly reduced by using a larger amount of unlabelled data. We did not attempt to push numbers in order to achieve good phone error rates in the low-resource languages, as we only tested a linear separation layer for phoneme classification. Further work needs to be done to establish how these pretrained features can best be used in the low-resource setting (see [18]), and with other ASR tasks [17].

S1 Supplementary methods

We describe here ablation experiments comparing our reimplementation of the original CPC model [27] and improvements we made to this model.

S1.1 Changing the normalization method

In order to make the training more stable, we replaced the batch normalization in the original model with layer normalization. The results are illustrated in Table S1.

Across Within
Trained on Librispeech-100
CPC [27] 13.0 9.6
CPC + Layer norm (LN) 12.0 8.7
Table S1: Impact of the normalization method on the phoneme discriminability. Within- and across-speakers ABX scores for the English Zerospeech2017 test set.

S1.2 Choosing the right predictor design

We compared several alternatives to the linear prediction model initially presented in [27]. We supposed that if the prediction network is too simple, the auto-regressive network will perform a significant part of the prediction task. We therefore thought that a more complex architecture would improve the quality of the output features. The results of our experiments are compiled in Table S2.

Across Within
Trained on Librispeech-100
CPC + LN 12.0 8.7
CPC + LN + Conv8 13.4 9.2
CPC + LN + FFD 11.7 8.56
CPC + LN + transformer 9.5 7.3
CPC + LN + transformer + dropout 9.3 6.8
Table S2: Phoneme discriminability for various predictor designs. Within- and across-speakers ABX scores for the English Zerospeech2017 test set.

S2 Supplementary results

Here, we present results on CPC features trained on the recently released Libri-light 60k-hour dataset [17]. As seen in Table S3, we now beat both the Bottleneck and Supervised features on all languages except one. The comparison between Bottleneck and CPC features is displayed in Figure S1.

Model Pretraining Frozen du es fr it ky ru sv tr tt zh Avg
Bottleneck [9] Babel-1070h Yes 47.9 36.6 48.3 39.0 38.7 45.2 52.6 43.4 42.5 54.3 44.9
Supervised LS-100h Yes 42.4 36.4 47.0 40.5 41.0 43.6 47.0 48.5 41.5 56.8 44.5
Modified CPC LL-60K Yes 43.1 36.4 44.3 37.8 37.5 42.4 46.5 45.7 40.6 53.2 42.7
Table S3: Transfer of pre-trained phoneme features across languages. Phone Error Rate of linear classification of phonemes based on features pre-trained on 60k hours of Libri-light, compared to multilingual bottleneck features (“Bottleneck”) trained on 1070h from the Babel dataset and a supervised baseline trained on LibriSpeech 100h clean. The linear classifier is trained on the frozen features using 1h of speech from the Common Voice database in different languages. The languages are: Dutch (du), Spanish (es), French (fr), Italian (it), Kyrgyz (ky), Russian (ru), Swedish (sv), Turkish (tr), Tatar (tt) and Mandarin (zh).

Figure S1: CPC versus Bottleneck features. The CPC features here have been trained on the 60k-hour Libri-light dataset.

Footnotes

  1. thanks: * Code and data available at https://github.com/facebookresearch/CPC_audio. This is the extended reprint of: Rivière, M., Joulin, A., Mazaré, P.E. and Dupoux, E. (2020). Unsupervised pretraining transfers well across languages. In ICASSP-2020.
  2. https://github.com/facebookresearch/CPC_audio
  3. https://voice.mozilla.org
  4. https://gitlab.coml.lscp.ens.fr/mbernard/phonemizer

References

  1. O. Adams, M. Wiesner, S. Watanabe and D. Yarowsky (2019) Massively multilingual adversarial speech recognition. arXiv:1904.02210. Cited by: §2.1.
  2. P. Bojanowski and A. Joulin (2017) Unsupervised learning by predicting noise. In ICML, Cited by: §2.2.
  3. L. Burget, P. Schwarz, M. Agarwal, P. Akyazi, K. Feng, A. Ghoshal, O. Glembek, N. Goel, M. Karafiát and D. Povey (2010) Multilingual acoustic modeling for speech recognition based on subspace gaussian mixture models. In ICASSP, Cited by: §2.1.
  4. S. Chopra, R. Hadsell and Y. LeCun (2005) Learning a similarity metric discriminatively, with application to face verification. In CVPR, Cited by: §1, §2.2.
  5. J. Chorowski, R. J. Weiss, S. Bengio and A. Oord (2019) Unsupervised speech representation learning using wavenet autoencoders. arXiv:1901.08810. Cited by: Table 1, Table 4.
  6. S. Dalmia, R. Sanabria, F. Metze and A. W. Black (2018) Sequence-based multi-lingual low resource speech recognition. In ICASSP, Cited by: §2.1.
  7. A. Dosovitskiy, J. T. Springenberg, M. Riedmiller and T. Brox (2014) Discriminative unsupervised feature learning with convolutional neural networks. In NIPS, Cited by: §2.2.
  8. E. Dunbar, X. Cao, J. Benjumea, J. Karadayi, M. Bernard, L. Besacier, X. Anguera and E. Dupoux (2017) The zero resource speech challenge 2017. arXiv:1712.04313. Cited by: Table 1.
  9. R. Fer, P. Matějka, F. Grézl, O. Plchot, K. Veselỳ and J. H. Černockỳ (2017) Multilingually trained bottleneck features in spoken language recognition. Computer Speech & Language 46, pp. 252–267. Cited by: Table S3, Table 3.
  10. A. Graves, S. Fernández, F. Gomez and J. Schmidhuber (2006) Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks. In ICML, Cited by: §3.3.
  11. M. Heck, S. Sakti and S. Nakamura (2017) Feature optimized dpgmm clustering for unsupervised subword modeling: a contribution to zerospeech 2017. In ASRU, Cited by: Table 1, Table 4.
  12. G. Heigold, V. Vanhoucke, A. Senior, P. Nguyen, M. A. Ranzato, M. Devin and J. Dean (2013) Multilingual acoustic models using distributed deep neural networks. In ICASSP, Cited by: §1, §2.1.
  13. R. D. Hjelm, A. Fedorov, S. Lavoie-Marchildon, K. Grewal, P. Bachman, A. Trischler and Y. Bengio (2018) Learning deep representations by mutual information estimation and maximization. arXiv:1808.06670. Cited by: §2.2.
  14. J.-T. Huang, J. Li, D. Yu, L. Deng and Y. Gong (2013) Cross-language knowledge transfer using multilingual deep neural network with shared hidden layers. In ICASSP, Cited by: §1, §2.1.
  15. A. Hyvarinen and H. Morioka (2016) Unsupervised feature extraction by time-contrastive learning and nonlinear ica. In NIPS, Cited by: §1, §2.2.
  16. S. Ioffe and C. Szegedy (2015) Batch normalization: accelerating deep network training by reducing internal covariate shift. arXiv:1502.03167. Cited by: §3.2.1.
  17. J. Kahn, M. Rivière, W. Zheng, E. Kharitonov, Q. Xu, P.-E. Mazaré, J. Karadayi, V. Liptchinsky, R. Collobert, C. Fuegen, T. Likhomanenko, G. Synnaeve, A. Joulin, A. Mohamed and E. Dupoux (2020) Libri-light: a benchmark for asr with limited or no supervision. In INTERSPEECH, External Links: 1912.07875 Cited by: §S2, §5.2, §6.
  18. K. Kawakami, L. Wang, C. Dyer, P. Blunsom and A. van den Oord (2020) Learning robust and multilingual speech representations. External Links: 2001.11128 Cited by: §6.
  19. X. Li, S. Dalmia, A. W. Black and F. Metze (2019) Multilingual speech recognition with corpus relatedness sampling. arXiv:1908.01060. Cited by: §2.1.
  20. T. Mikolov, I. Sutskever, K. Chen, G. Corrado and J. Dean (2013) Distributed representations of words and phrases and their compositionality. In NIPS, Cited by: §2.2.
  21. T. Schatz, V. Peddinti, F. Bach, A. Jansen, H. Hermansky and E. Dupoux (2013) Evaluating speech features with the minimal-pair abx task: analysis of the classical mfc/plp pipeline. INTERSPEECH. Cited by: §4.0.3.
  22. S. Schneider, A. Baevski, R. Collobert and M. Auli (2019) Wav2vec: unsupervised pre-training for speech recognition. arXiv:1904.05862. Cited by: §1, §1, §2.2, §5.2.
  23. F. Schroff, D. Kalenichenko and J. Philbin (2015) Facenet: a unified embedding for face recognition and clustering. In CVPR, Cited by: §2.2.
  24. T. Schultz and A. Waibel (2001) Language-independent and language-adaptive acoustic modeling for speech recognition. Speech Communication. Cited by: §2.1.
  25. A. Stolcke, F. Grezl, M.-Y. Hwang, X. Lei, N. Morgan and D. Vergyri (2006) Cross-domain and cross-language portability of acoustic features estimated by multilayer perceptrons. In ICASSP, Cited by: §2.1.
  26. Y. Tian, D. Krishnan and P. Isola (2019) Contrastive multiview coding. arXiv:1906.05849. Cited by: §2.2.
  27. A. van den Oord, Y. Li and O. Vinyals (2018) Representation learning with contrastive predictive coding. arXiv:1807.03748. Cited by: §S1.2, Table S1, §1, §1, §S1, §2.2, §3.1, §3.1, §3, §4.0.1, §5.1, Table 1, Table 2, Table 3, Table 4, Table 5.
  28. A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser and I. Polosukhin (2017) Attention is all you need. In NIPS, Cited by: §3.2.2.
  29. K. Veselỳ, M. Karafiát, F. Grézl, M. Janda and E. Egorova (2012) The language-independent bottleneck features. In SLT, Cited by: §1, §2.1.
  30. K. Q. Weinberger and L. K. Saul (2009) Distance metric learning for large margin nearest neighbor classification. JMLR. Cited by: §2.2.