Syllable-Based Sequence-to-Sequence Speech Recognition with the Transformer in Mandarin Chinese

Abstract

Sequence-to-sequence attention-based models, which integrate the acoustic, pronunciation and language models into a single neural network, have recently shown very promising results on automatic speech recognition (ASR) tasks. Among these models, the Transformer, a sequence-to-sequence attention-based model relying entirely on self-attention without using RNNs or convolutions, has achieved a new single-model state-of-the-art BLEU score on neural machine translation (NMT) tasks. Given this outstanding performance, we extend the Transformer to speech and adopt it as the basic architecture of our sequence-to-sequence attention-based models for Mandarin Chinese ASR tasks. Furthermore, we compare a syllable based model with a context-independent phoneme (CI-phoneme) based model built on the Transformer in Mandarin Chinese. Additionally, a greedy cascading decoder with the Transformer is proposed for mapping CI-phoneme sequences and syllable sequences into word sequences. Experiments on HKUST datasets demonstrate that the syllable based model with the Transformer performs better than its CI-phoneme based counterpart and achieves a character error rate (CER) of 28.77%, which is competitive with the state-of-the-art CER of 28.0% obtained by the joint CTC-attention based encoder-decoder network.

Shiyu Zhou, Linhao Dong, Shuang Xu, Bo Xu
Institute of Automation, Chinese Academy of Sciences; University of Chinese Academy of Sciences
{zhoushiyu2013, donglinhao2015, shuang.xu, xubo}@ia.ac.cn

Index Terms: ASR, multi-head attention, syllable based acoustic modeling, sequence-to-sequence

1 Introduction

Sequence-to-sequence modeling with attention [1, 2, 3, 4] has attracted significant interest for ASR tasks in recent years. Sequence-to-sequence attention-based models integrate the separate acoustic, pronunciation and language models of a conventional ASR system into a single neural network [5] and do not make the conditional independence assumptions of standard hidden Markov model based systems [6].

Sequence-to-sequence attention-based models are commonly comprised of an encoder, which consists of multiple recurrent neural network (RNN) layers that model the acoustics, and a decoder, which consists of one or more RNN layers that predict the output sub-word sequence. An attention layer acts as the interface between the encoder and the decoder: it selects frames in the encoder representation that the decoder should attend to in order to predict the next sub-word unit [5]. However, RNNs maintain a hidden state of the entire past, which prevents parallel computation within a sequence. To reduce sequential computation, the Transformer architecture was proposed in [7]. It eschews recurrence and instead relies entirely on an attention mechanism to draw global dependencies between input and output, which allows for significantly more parallelization and achieves a new single-model state-of-the-art BLEU score on NMT tasks [7]. Given this outstanding performance, this paper focuses on the Transformer as the basic architecture of sequence-to-sequence attention-based models for Mandarin Chinese ASR tasks.

Recently, various modeling units for sequence-to-sequence attention-based models have been studied on English ASR tasks, such as graphemes, CI-phonemes, context-dependent phonemes and word piece models [1, 5, 8]. However, few related works have explored sequence-to-sequence attention-based models on Mandarin Chinese ASR tasks. As is well known, Mandarin Chinese is a syllable-based language in which the syllable is the logical unit of pronunciation. The number of syllables is fixed (around 1400 pinyins with tones are used in this work) and each written character corresponds to one syllable. In addition, a syllable is a longer linguistic unit, which reduces the difficulty of syllable choices in the decoder of sequence-to-sequence attention-based models. Moreover, syllables have the advantage of avoiding the out-of-vocabulary (OOV) problem.

Due to these advantages of syllables, we adopt syllables as the modeling unit in this paper and compare a CI-phoneme based model with a syllable based model built on the Transformer on Mandarin Chinese ASR tasks. Moreover, since we compare CI-phonemes with syllables, the CI-phoneme sequences or syllable sequences output by the Transformer have to be converted into word sequences so that performance can be compared in terms of CER. The conversion from CI-phoneme sequences or syllable sequences to word sequences can itself be regarded as a sequence-to-sequence task, which is also modeled by the Transformer in this paper. We then propose a greedy cascading decoder with the Transformer to approximately maximize the posterior probability. Experiments on HKUST datasets reveal that the Transformer performs very well on Mandarin Chinese ASR tasks. Moreover, we experimentally confirm that the syllable based model with the Transformer outperforms its CI-phoneme based counterpart and achieves a CER of 28.77%, which is competitive with the state-of-the-art CER of 28.0% obtained by the joint CTC-attention based encoder-decoder network [9].

The rest of the paper is organized as follows. After an overview of related work in Section 2, Section 3 describes the proposed method in detail. We then show experimental results in Section 4 and conclude this work in Section 5.

2 Related work

Sequence-to-sequence attention-based models have shown very encouraging results on English ASR tasks [1, 8, 10]. However, it is quite difficult to apply them to Mandarin Chinese ASR tasks. In [11], a joint Character-Pinyin sequence-to-sequence attention-based model was proposed for Mandarin Chinese ASR, in which the Pinyin information was only used during training to improve the performance of the character model. Instead of using a joint Character-Pinyin model, [12] directly used Chinese characters as the network output by mapping the one-hot character representation to an embedding vector via a neural network layer.

In this paper, we are concerned with syllables as the modeling unit. Acoustic models using syllables as the modeling unit have been investigated for a long time [13, 14, 15]. Ganapathiraju et al. were the first to show that syllable based acoustic models can outperform context-dependent phone based acoustic models with GMMs [14]. Wu et al. experimented with syllable based context-dependent Chinese acoustic models and found that they can show promising performance [15]. Qu et al. [13] explored a CTC-SMBR-LSTM using syllables as outputs and verified that a syllable based CTC model can perform better than a CI-phoneme based CTC model on Mandarin Chinese ASR tasks. Inspired by [13], we extend their work from CTC based models to sequence-to-sequence attention-based models.

Using syllables as the modeling unit, it is natural to consider the conversion from Chinese syllable sequences to Chinese word sequences as a task of labelling unsegmented sequence data. Liu et al. [16] proposed RNN based supervised sequence labelling method with CTC algorithm to achieve a direct conversion from syllable sequences to word sequences.

3 System overview

3.1 Transformer model

The Transformer model architecture is the same as that of other sequence-to-sequence attention-based models except that it relies entirely on self-attention and position-wise, fully connected layers for both the encoder and decoder [7]. The encoder maps an input sequence of symbol representations $x = (x_1, \ldots, x_n)$ to a sequence of continuous representations $z = (z_1, \ldots, z_n)$. Given $z$, the decoder then generates an output sequence $y = (y_1, \ldots, y_m)$ of symbols one element at a time.

Multi-head attention

An attention function maps a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors. The output is computed as a weighted sum of the values, where the weight assigned to each value is computed by a compatibility function of the query with the corresponding key [7]. Scaled dot-product attention is adopted as the basic attention function in the Transformer, as described by equation (1):

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\left(\frac{QK^T}{\sqrt{d_k}}\right)V \quad (1)$$

where the queries $Q$ and keys $K$ have the same dimension $d_k$, and the values $V$ have dimension $d_v$.

Instead of performing a single attention function, the Transformer employs multi-head attention (MHA), which projects the queries, keys and values $h$ times with different, learned linear projections to $d_k$, $d_k$ and $d_v$ dimensions, respectively. On each of these projected versions of queries, keys and values, the basic attention function is performed in parallel, yielding $d_v$-dimensional output values. These are concatenated and projected again, resulting in the final values. The equations can be represented as follows [7]:

$$\mathrm{MultiHead}(Q, K, V) = \mathrm{Concat}(\mathrm{head}_1, \ldots, \mathrm{head}_h)W^O \quad (2)$$
$$\mathrm{head}_i = \mathrm{Attention}(QW_i^Q, KW_i^K, VW_i^V) \quad (3)$$

where the projections are parameter matrices $W_i^Q \in \mathbb{R}^{d_{model} \times d_k}$, $W_i^K \in \mathbb{R}^{d_{model} \times d_k}$, $W_i^V \in \mathbb{R}^{d_{model} \times d_v}$ and $W^O \in \mathbb{R}^{hd_v \times d_{model}}$, $h$ is the number of heads, and $d_{model}$ is the model dimension.

MHA behaves like ensembles of relatively small attentions to allow the model to jointly attend to information from different representation subspaces at different positions, which is beneficial to learn complicated alignments between the encoder and decoder.
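As a concrete illustration, equations (1)-(3) can be sketched in a few lines of NumPy. The function names and the list-of-matrices layout for the projections are our own choices, not the paper's implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Scaled dot-product attention, equation (1).
    # Q: (n_q, d_k), K: (n_k, d_k), V: (n_k, d_v) -> (n_q, d_v)
    d_k = Q.shape[-1]
    weights = softmax(Q @ K.T / np.sqrt(d_k))  # each row sums to 1
    return weights @ V

def multi_head_attention(Q, K, V, Wq, Wk, Wv, Wo):
    # Multi-head attention, equations (2)-(3).
    # Wq[i], Wk[i]: (d_model, d_k); Wv[i]: (d_model, d_v); Wo: (h*d_v, d_model)
    heads = [attention(Q @ wq, K @ wk, V @ wv)
             for wq, wk, wv in zip(Wq, Wk, Wv)]
    return np.concatenate(heads, axis=-1) @ Wo
```

With $h = 8$ heads and $d_{model} = 512$ as in the base model of [7], $d_k = d_v = 64$, so the concatenated heads recover the model dimension.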

Figure 1: The architecture of the ASR Transformer.

Transformer model architecture

The architecture of the ASR Transformer is shown in Figure 1, which stacks MHA and position-wise, fully connected layers for both the encoder and decoder. The encoder is composed of a stack of identical layers, each with two sub-layers: the first is a MHA, and the second is a position-wise fully connected feed-forward network. Residual connections are employed around each of the two sub-layers, followed by layer normalization. The decoder is similar to the encoder except that it inserts a third sub-layer to perform MHA over the output of the encoder stack. To prevent leftward information flow and preserve the auto-regressive property in the decoder, the self-attention sub-layers in the decoder mask out all values corresponding to illegal connections. In addition, positional encodings [7] are added to the inputs at the bottoms of the encoder and decoder stacks; they inject information about the relative or absolute position in the sequence so that the model can make use of its order.
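Two of these details can be made concrete in a short NumPy sketch: the sinusoidal positional encodings and the decoder's causal mask, both following the formulation in [7] (the function names are our own):

```python
import numpy as np

def positional_encoding(max_len, d_model):
    # PE(pos, 2i) = sin(pos / 10000^(2i/d_model)), PE(pos, 2i+1) = cos(...)
    pos = np.arange(max_len)[:, None]
    i = np.arange(d_model // 2)[None, :]
    angles = pos / np.power(10000.0, 2.0 * i / d_model)
    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

def causal_mask(n):
    # True marks an illegal connection: position t may not attend to t' > t.
    return np.triu(np.ones((n, n), dtype=bool), k=1)
```

Before the masked softmax, the scores at the `True` positions are set to a large negative value, so each decoder position attends only to itself and earlier positions.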

Since our ASR experiments use 80-dimensional log-Mel filterbank features, we apply a linear transformation with layer normalization to convert the input dimension to the model dimension, which is marked by a dotted line in Figure 1.
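A minimal sketch of this front-end, assuming a simple gain-free layer normalization and an illustrative weight matrix:

```python
import numpy as np

def layer_norm(x, eps=1e-6):
    # Normalize each frame to zero mean and unit variance over its features.
    mu = x.mean(axis=-1, keepdims=True)
    sigma = x.std(axis=-1, keepdims=True)
    return (x - mu) / (sigma + eps)

def input_projection(feats, W, b):
    # feats: (T, 80) log-Mel filterbank frames -> (T, d_model)
    return layer_norm(feats @ W + b)
```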

3.2 Greedy cascading decoder with the Transformer

Since both syllables and CI-phonemes are investigated in this paper, the CI-phoneme sequences or syllable sequences have to be converted into word sequences using a lexicon during beam-search decoding.

The speech recognition problem can be defined as the problem of finding the word sequence $\hat{W}$ that maximizes the posterior probability $P(W|X)$ given the observation $X$, which can be transformed as follows [17]:

$$\hat{W} = \arg\max_{W} P(W|X) \quad (4)$$
$$= \arg\max_{W} \sum_{S} P(W|S)P(S|X) \quad (5)$$
$$\approx \arg\max_{W} \max_{S} P(W|S)P(S|X) \quad (6)$$

Here, $S$ denotes a sub-word unit sequence, $P(S|X)$ is the probability of the sub-word unit sequence given the observation $X$, and $P(W|S)$ is the probability of the word sequence given the sub-word unit sequence.

According to equation (6), we propose that both $P(S|X)$ and $P(W|S)$ can be regarded as sequence-to-sequence transformations, which can be modeled by sequence-to-sequence attention-based models; specifically, the Transformer is used in this paper.
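A toy numeric example illustrates the decomposition: with small, invented probability tables for $P(S|X)$ and $P(W|S)$, the exact summation of equation (5) and the max approximation of equation (6) can be computed directly. The tables are purely illustrative and deliberately chosen to show that the approximation can, in principle, change the decision:

```python
# Hypothetical distributions over two sub-word sequences and two word sequences.
p_s_given_x = {"s1": 0.7, "s2": 0.3}                  # P(S|X)
p_w_given_s = {("w1", "s1"): 0.6, ("w2", "s1"): 0.4,  # P(W|S)
               ("w1", "s2"): 0.1, ("w2", "s2"): 0.9}
words = ("w1", "w2")

def decode_sum():
    # Equation (5): argmax_W sum_S P(W|S) P(S|X)
    score = {w: sum(p_w_given_s[(w, s)] * p for s, p in p_s_given_x.items())
             for w in words}
    return max(score, key=score.get)

def decode_max():
    # Equation (6): argmax_W max_S P(W|S) P(S|X)
    score = {w: max(p_w_given_s[(w, s)] * p for s, p in p_s_given_x.items())
             for w in words}
    return max(score, key=score.get)
```

In this contrived case the two criteria disagree (0.55 vs. 0.45 favors "w2" under the sum, while 0.42 vs. 0.28 favors "w1" under the max), which is why the quality of the first-pass sub-word hypotheses matters for the greedy cascading decoder.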

Then, the greedy cascading decoder with the Transformer is proposed to directly estimate equation (6). First, the best sub-word unit sequence $\hat{S}$ is calculated by the Transformer from the observation $X$ with a beam search. Then, the best word sequence $\hat{W}$ is chosen by the Transformer from the sub-word unit sequence $\hat{S}$, again with a beam search. Through cascading the two sequence-to-sequence attention-based models, we assume that equation (6) can be approximated.

In this work, fixed beam sizes are employed for the two decoding steps.
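The two-stage decoding could be sketched with a generic beam search; the step-wise scoring function here is a toy stand-in for the two Transformers' output distributions, and the cascade at the bottom is shown only as commented pseudocode under that assumption:

```python
import heapq

def beam_decode(score_step, vocab, beam, max_len, eos):
    # Generic beam search: score_step(prefix, token) returns a log-probability
    # increment for extending `prefix` with `token`.
    beams = [(0.0, ())]
    for _ in range(max_len):
        cand = []
        for lp, prefix in beams:
            if prefix and prefix[-1] == eos:   # finished hypothesis is kept as-is
                cand.append((lp, prefix))
                continue
            for tok in vocab:
                cand.append((lp + score_step(prefix, tok), prefix + (tok,)))
        beams = heapq.nlargest(beam, cand)
    return max(beams)[1]

# Greedy cascade: best sub-word sequence first, then best word sequence given it.
# In practice acoustic_step and word_step would wrap the two trained Transformers:
#   subwords = beam_decode(acoustic_step, subword_vocab, b1, T1, "</s>")
#   words    = beam_decode(lambda p, t: word_step(subwords, p, t),
#                          word_vocab, b2, T2, "</s>")
```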

4 Experiment

4.1 Data

The HKUST corpus (LDC2005S15, LDC2005T32), a corpus of Mandarin Chinese conversational telephone speech, was collected and transcribed by the Hong Kong University of Science and Technology (HKUST) [18]. It contains about 150 hours of speech, with 873 calls in the training set and 24 calls in the test set. All experiments are conducted using 80-dimensional log-Mel filterbank features, computed with a 25ms window shifted every 10ms. The features are normalized via mean subtraction and variance normalization on a per-speaker basis. Similar to [19, 20], at each frame, the features are stacked with 3 frames to the left and downsampled to a 30ms frame rate.
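A sketch of this frame stacking and downsampling step, assuming zero-padding at the utterance start (the padding choice is ours; the paper does not specify it):

```python
import numpy as np

def stack_and_downsample(feats, left=3, rate=3):
    # feats: (T, 80) log-Mel frames at a 10 ms rate.
    # Stack each frame with `left` previous frames, then keep every
    # `rate`-th stacked frame to reach a 30 ms frame rate.
    T, d = feats.shape
    padded = np.concatenate([np.zeros((left, d)), feats], axis=0)
    stacked = np.concatenate([padded[i:i + T] for i in range(left + 1)], axis=1)
    return stacked[::rate]
```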

4.2 Training

We perform our experiments on the base and big models of the Transformer from [7] (D512-H8 and D1024-H16, respectively). The basic architecture of these two models is the same, but with different parameter settings. Table 1 lists the experimental parameters of the two models. The Adam algorithm [21] with gradient clipping and warmup is used for optimization. During training, label smoothing is employed [22].

Table 1: Experimental parameter configurations of the D512-H8 and D1024-H16 models.
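Two of these training details can be illustrated compactly. The learning-rate schedule below is the warmup schedule of [7]; the warmup length and the smoothing value are illustrative defaults, since this paper does not state its own values in the text:

```python
import numpy as np

def noam_lr(step, d_model=512, warmup=4000):
    # lrate = d_model^-0.5 * min(step^-0.5, step * warmup^-1.5), as in [7]:
    # linear warmup, then decay proportional to the inverse square root of step.
    return d_model ** -0.5 * min(step ** -0.5, step * warmup ** -1.5)

def smooth_labels(one_hot, eps=0.1):
    # Label smoothing [22]: move eps of the probability mass to a uniform
    # distribution over all classes.
    n = one_hot.shape[-1]
    return one_hot * (1.0 - eps) + eps / n
```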

First, for the Transformer from observation to sub-word unit sequence, CI-phonemes without silence (phonemes with tones) are employed in the CI-phoneme based experiments and syllables (pinyins with tones) in the syllable based experiments. Extra tokens (i.e. an unknown token (<UNK>), a padding token (<PAD>), and sentence start and end tokens (<S>/<\S>)) are appended to the outputs, making the total number of outputs and respectively in the CI-phoneme based model and syllable based model. Second, for the Transformer from sub-word unit sequence to word sequence, we collect all words from the training data together with appended extra tokens and the total number of outputs is . In our experiments, we only train the Transformer from sub-word unit sequence to word sequence by the base model.

Standard tied-state cross-word triphone GMM-HMMs are first trained with maximum likelihood estimation to generate CI-phoneme alignments on the training and test sets, which handles multiple pronunciations of the same word in Mandarin Chinese. We then generate syllable alignments from these CI-phoneme alignments according to the lexicon. Finally, we train the Transformer with these alignments.

In order to verify the effectiveness of the greedy cascading decoder proposed in this paper, the CI-phoneme and syllable alignments on the test data are converted into word sequences using the trained models. We obtain a CER of on the CI-phoneme based model and on the syllable based model, respectively, which are the lower bounds of our experiments. If the sub-word unit sequences calculated by the Transformer from observation to sub-word unit sequence can approximate the corresponding alignments, our experimental results can approach these lower bounds using the greedy cascading decoder.
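For reference, the CER used throughout is the character-level edit distance normalized by the reference length; a minimal implementation:

```python
def cer(ref, hyp):
    # Character error rate: Levenshtein distance(ref, hyp) / len(ref).
    m, n = len(ref), len(hyp)
    dp = list(range(n + 1))            # dp[j] = distance(ref[:i], hyp[:j])
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                          # deletion
                        dp[j - 1] + 1,                      # insertion
                        prev + (ref[i - 1] != hyp[j - 1]))  # substitution/match
            prev = cur
    return dp[n] / max(m, 1)
```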

Figure 2 visualizes the self-attention alignments in the encoder layer and the vanilla attention alignments in the encoder-decoder layer with TensorFlow [23]. As can be seen in the figure, both the self-attention matrix and the vanilla attention matrix appear very localized, which helps us understand how changing the attention window influences the CER.

Figure 2: Self-attention (top) in the encoder, where both the x-axis and y-axis represent input frames. Vanilla encoder-decoder attention (bottom), where the x-axis represents input frames and the y-axis corresponds to output labels.

4.3 Results of CI-phoneme and syllable based model

Our results are summarized in Table 2. As can be seen in the table, both the CI-phoneme and syllable based models with the Transformer achieve competitive results on HKUST datasets in terms of CER. This reveals that the Transformer is well suited to ASR tasks owing to its powerful sequence modeling capability, even though it relies entirely on self-attention without using RNNs or convolutions. Furthermore, we note that the CER of the syllable based model is better than that of the corresponding CI-phoneme based model. The results suggest that the syllable is a better modeling unit than the CI-phoneme for sequence-to-sequence attention-based models on Mandarin Chinese ASR tasks, which validates the conclusion drawn for CTC based models [13]. Finally, the big model consistently performs better than the base model, for both the CI-phoneme based and syllable based models. Therefore, our further experiments are conducted on the big model.

We further generate more training data by linearly scaling the audio lengths by two factors (speed perturb.) [9]. It can be observed that the syllable based model with speed perturbation improves and achieves the best CER of 28.77%, compared to the model without it. However, the CI-phoneme based model with speed perturbation becomes very slightly worse than without it. One interpretation of this phenomenon is that syllables have a longer duration and more invariance than CI-phonemes, so a small speed perturbation does not affect the pronunciation of syllables too much and instead provides more useful and varied training data. However, a small speed perturbation might have more impact on the pronunciation of CI-phonemes due to their short duration.

sub-word unit | model                     | CER
CI-phonemes   | D512-H8                   |
CI-phonemes   | D1024-H16                 | 30.65
CI-phonemes   | D1024-H16 (speed perturb) |
Syllables     | D512-H8                   |
Syllables     | D1024-H16                 |
Syllables     | D1024-H16 (speed perturb) | 28.77

Table 2: Comparison of CI-phoneme and syllable based model with the Transformer on HKUST datasets in CER (%).

4.4 Comparison with previous works

In Table 3, we compare our experimental results to other model architectures from the literature on HKUST datasets. First, the result of the CI-phoneme based model with the Transformer is comparable to the best hybrid-system result, obtained by deep multidimensional residual learning with 9 LSTM layers [24], and the syllable based model with the Transformer provides a relative improvement in CER over it. Moreover, the CER of the syllable based model with the Transformer is comparable to the CER of the joint CTC-attention based encoder-decoder network [9] when no external language model is used, but slightly worse than its CER with a separate RNN-LM, which is the state-of-the-art on HKUST datasets to the best of our knowledge.

model                                                                         | CER
LSTMP-9800P512-F444 [24]                                                      |
CTC-attention + joint dec. (speed perturb., one-pass) + VGG net + RNN-LM (separate) [9] | 28.0
CI-phonemes-D1024-H16                                                         |
Syllables-D1024-H16 (speed perturb)                                           | 28.77

Table 3: CER (%) on HKUST datasets compared to previous works.

4.5 Comparison of different frame rates

Finally, Table 4 compares different frame rates for the CI-phoneme and syllable based models with the Transformer. It indicates that the performance of both models decreases as the frame rate increases. The decrease is relatively slow from ms to ms, but the performance deteriorates rapidly from ms to ms. Thus, a frame rate between ms and ms performs relatively well for the CI-phoneme and syllable based models with the Transformer.

model                                   | frame rate | CER
CI-phonemes-D1024-H16 (speed perturb)   |            | 30.72
Syllables-D1024-H16 (speed perturb)     |            | 28.77

Table 4: Comparison of different frame rates on HKUST datasets in CER (%).

5 Conclusions

In this paper, we applied the Transformer, a sequence transduction model based entirely on self-attention without using RNNs or convolutions, to Mandarin Chinese ASR tasks and verified its effectiveness on HKUST datasets. Furthermore, we compared syllables and CI-phonemes as the modeling unit in sequence-to-sequence attention-based models with the Transformer in Mandarin Chinese. Our experimental results demonstrated that the syllable based model with the Transformer performs better than its CI-phoneme based counterpart on HKUST datasets. Moreover, a greedy cascading decoder with the Transformer was proposed to approximately maximize the posterior probability. Experimental results on the CI-phoneme and syllable based models verified the effectiveness of the greedy cascading decoder.

6 Acknowledgements

The authors would like to thank Chunqi Wang for insightful discussions on training and tuning the Transformer.

References

  1. C.-C. Chiu, T. N. Sainath, Y. Wu, R. Prabhavalkar, P. Nguyen, Z. Chen, A. Kannan, R. J. Weiss, K. Rao, K. Gonina et al., “State-of-the-art speech recognition with sequence-to-sequence models,” arXiv preprint arXiv:1712.01769, 2017.
  2. J. K. Chorowski, D. Bahdanau, D. Serdyuk, K. Cho, and Y. Bengio, “Attention-based models for speech recognition,” in Advances in neural information processing systems, 2015, pp. 577–585.
  3. D. Bahdanau, J. Chorowski, D. Serdyuk, P. Brakel, and Y. Bengio, “End-to-end attention-based large vocabulary speech recognition,” in Acoustics, Speech and Signal Processing (ICASSP), 2016 IEEE International Conference on.   IEEE, 2016, pp. 4945–4949.
  4. W. Chan, N. Jaitly, Q. V. Le, and O. Vinyals, “Listen, attend and spell,” arXiv preprint arXiv:1508.01211, 2015.
  5. R. Prabhavalkar, T. N. Sainath, B. Li, K. Rao, and N. Jaitly, “An analysis of ‘attention’ in sequence-to-sequence models,” in Proc. of Interspeech, 2017.
  6. H. A. Bourlard and N. Morgan, Connectionist speech recognition: a hybrid approach.   Springer Science & Business Media, 2012, vol. 247.
  7. A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, “Attention is all you need,” in Advances in Neural Information Processing Systems, 2017, pp. 6000–6010.
  8. R. Prabhavalkar, K. Rao, T. N. Sainath, B. Li, L. Johnson, and N. Jaitly, “A comparison of sequence-to-sequence models for speech recognition,” in Proc. Interspeech, 2017, pp. 939–943.
  9. T. Hori, S. Watanabe, Y. Zhang, and W. Chan, “Advances in joint ctc-attention based end-to-end speech recognition with a deep cnn encoder and rnn-lm,” arXiv preprint arXiv:1706.02737, 2017.
  10. Y. Zhang, W. Chan, and N. Jaitly, “Very deep convolutional networks for end-to-end speech recognition,” in Acoustics, Speech and Signal Processing (ICASSP), 2017 IEEE International Conference on.   IEEE, 2017, pp. 4845–4849.
  11. W. Chan and I. Lane, “On online attention-based speech recognition and joint mandarin character-pinyin training.” in INTERSPEECH, 2016, pp. 3404–3408.
  12. C. Shan, J. Zhang, Y. Wang, and L. Xie, “Attention-based end-to-end speech recognition on voice search.”
  13. Z. Qu, P. Haghani, E. Weinstein, and P. Moreno, “Syllable-based acoustic modeling with ctc-smbr-lstm,” 2017.
  14. A. Ganapathiraju, J. Hamaker, J. Picone, M. Ordowski, and G. R. Doddington, “Syllable-based large vocabulary continuous speech recognition,” IEEE Transactions on speech and audio processing, vol. 9, no. 4, pp. 358–366, 2001.
  15. H. Wu and X. Wu, “Context dependent syllable acoustic model for continuous chinese speech recognition,” in Eighth Annual Conference of the International Speech Communication Association, 2007.
  16. Y. Liu, J. Hua, X. Li, T. Fu, and X. Wu, “Chinese syllable-to-character conversion with recurrent neural network based supervised sequence labelling,” in Signal and Information Processing Association Annual Summit and Conference (APSIPA), 2015 Asia-Pacific.   IEEE, 2015, pp. 350–353.
  17. N. Kanda, X. Lu, and H. Kawai, “Maximum a posteriori based decoding for ctc acoustic models.” in Interspeech, 2016, pp. 1868–1872.
  18. Y. Liu, P. Fung, Y. Yang, C. Cieri, S. Huang, and D. Graff, “Hkust/mts: A very large scale mandarin telephone speech corpus,” in Chinese Spoken Language Processing.   Springer, 2006, pp. 724–735.
  19. H. Sak, A. Senior, K. Rao, and F. Beaufays, “Fast and accurate recurrent neural network acoustic models for speech recognition,” arXiv preprint arXiv:1507.06947, 2015.
  20. A. Kannan, Y. Wu, P. Nguyen, T. N. Sainath, Z. Chen, and R. Prabhavalkar, “An analysis of incorporating an external language model into a sequence-to-sequence model,” arXiv preprint arXiv:1712.01996, 2017.
  21. D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.
  22. C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, “Rethinking the inception architecture for computer vision,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 2818–2826.
  23. M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin et al., “Tensorflow: Large-scale machine learning on heterogeneous distributed systems,” arXiv preprint arXiv:1603.04467, 2016.
  24. Y. Zhao, S. Xu, and B. Xu, “Multidimensional residual learning based on recurrent neural networks for acoustic modeling,” Interspeech 2016, pp. 3419–3423, 2016.