End-to-End Spoken Language Translation

Michelle Guo       Albert Haque   Prateek Verma
Technical Report
Stanford University (2017)
  This technical report was originally done in 2017 as a course project and can be found online at Stanford University here. It was uploaded to arXiv in 2019 for archival purposes.
Abstract

In this paper, we address the task of spoken language understanding. We present a method for translating spoken sentences from one language into spoken sentences in another language. Given spectrogram-spectrogram pairs, our model can be trained completely from scratch to translate unseen sentences. Our method consists of a pyramidal-bidirectional recurrent network combined with a convolutional network to output sentence-level spectrograms in the target language. Empirically, our model achieves competitive performance with state-of-the-art methods on multiple languages and can generalize to unseen speakers.


1 Introduction

Humans are able to seamlessly process different language representations despite syntactic, acoustic, and semantic variations. Recent studies from the neurolinguistics literature suggest that humans have distinct submodules for different linguistic functions (Hickok and Poeppel, 2007), and word-level and sentence-level processing may occur in entirely different cortical regions (Embick et al., 2000). One theory of acoustic-language invariance posits that different languages do activate different submodules, but that these submodules are linked to general language processing rather than to proficiency (Newman et al., 2012).

Figure 1: Our goal is to translate spoken sentences from one language to another, end-to-end.

Inspired by humans, modern machine translation systems often use a word-level model to aid in the translation process (Luong et al., 2015; Zou et al., 2013). In the case of text-based translation, learned word vectors or one-hot embeddings are the primary means of representing natural language (Pennington et al., 2014; Mikolov et al., 2013). For speech and acoustic inputs, however, word or phone embeddings are often used as a training convenience to provide additional sources of information and gradient flow to the model (Bourlard and Morgan, 1990; Wilpon et al., 1990). Spectrograms remain the dominant acoustic representation for both phoneme- and word-level tasks, since the high sampling rate and dimensionality of raw waveforms are difficult to model (van den Oord et al., 2016).

Intuitively, sentence-level and phrase-level representations seem to be a more powerful modeling tool for some tasks (Wilson et al., 2005). However, work on sentence- and phrase-level features in the context of automatic speech recognition and machine translation has been limited. The primary challenge is the ambiguity and combinatorial growth associated with long-term temporal dependencies, especially for longer sentences containing ten or twenty words in each language (Xu et al., 2015; Venugopalan et al., 2014). Another challenge is limited data: there are no publicly available spoken language translation datasets that contain multiple speakers and multiple languages.

In this paper, we address the task of spoken language understanding, distinct from speech recognition (speech-to-text) and speech synthesis (text-to-speech). Specifically, we propose a method for end-to-end spoken language translation (although the output of our model is not a waveform, we follow the definitions from Wang et al. (2017)). The input to our model is an acoustic sentence in one language and the output is an acoustic representation of the same sentence translated into a different language. Given the importance of the temporal speech cortex in the human brain (Geschwind and Levitsky, 1968), the foundation of our model is a recurrent network. Our method operates on spectrograms and combines the Listen-Attend-Spell model (Chan et al., 2016) with the Tacotron (Wang et al., 2017) architecture.

Our contributions are as follows. First, we propose a fully differentiable sentence-level acoustic model for translating sentences across languages. Empirically, we show the success of our model through various ablation studies and subjective evaluation tests. Second, we collect and publicly release a dataset containing twelve speakers uttering words from Spanish, Mandarin, Hindi, and English. Our dataset collection technique allows for efficient audio acquisition and programmatic generation of grammatically correct spoken sentences. This is an efficient and scalable way to collect large datasets for our spoken language translation task.

2 Related Work

Speech-to-Text. Also known as speech recognition, the task of speech-to-text is to accept as input an audio signal and output a text-based representation of the words spoken in the input. Recent work in speech recognition often employs Hidden Markov Models (HMMs) or deep networks.

HMM-based methods (Bahl et al., 1986) attempt to model sequences of acoustic features extracted from audio input. Among the simplest features are mel-frequency cepstral coefficients (MFCCs) (Logan et al., 2000), which attempt to model the frequency response of the human ear. Using the independence assumption, HMMs model words by analyzing a small temporal context window of acoustic features (Levinson, 1986). Hidden Markov models have been successful not only in speech recognition but also in emotion recognition (Schuller et al., 2003; Nwe et al., 2003), gender classification (Konig and Morgan, 1992), and accent classification (Arslan and Hansen, 1996).

Deep learning methods generally make use of recurrent networks. One such example is Connectionist Temporal Classification (CTC) for speech recognition (Graves et al., 2006; Graves and Jaitly, 2014). More recently, attention-based models (Xu et al., 2015) such as Listen, Attend and Spell (Chan et al., 2016) have employed sequence-to-sequence architectures with attention.

Text-to-Speech. Also known as speech synthesis, text-to-speech (TTS) systems have only recently started to show promising results. It has been shown that a pre-trained HMM aligner combined with a sequence-to-sequence model can learn appropriate alignments (Wang et al., 2016). Unfortunately, this approach is not end-to-end, since it predicts vocoder parameters, and it is unclear how much performance is gained from the HMM aligner. Char2wav (Sotelo et al., 2017) is another method trained end-to-end on character inputs to produce audio, but it also predicts vocoder parameters.

DeepVoice (Arik et al., 2017) improves on this by replacing nearly all components in a standard TTS pipeline with neural networks. While this is closer to a fully differentiable solution, each component is trained in isolation with a different optimization objective. WaveNet (van den Oord et al., 2016) is a powerful generative model of audio and generates realistic output speech. However, it is slow due to its sample-level autoregressive nature and requires domain-specific linguistic feature engineering.

Figure 2: Overview of our model. The inputs and outputs of our model are spectrograms. We use a convolutional network to encode temporal context windows in the input. Our encoder consists of a pyramidal bidirectional RNN which captures acoustic features and multiple levels of temporal resolution. The decoder is equipped with attention and outputs multiple spectrogram slices at a time.

Most similar to our work is Tacotron (Wang et al., 2017). In this work, the authors move even closer to a fully differentiable system. The input to Tacotron is a sequence of character embeddings and the output is a linear-scale spectrogram. After applying Griffin-Lim phase reconstruction (Nawab et al., 1983), the waveform is generated.

Text-Based Machine Translation. State-of-the-art machine translation methods generally come from a family of encoder-decoder, or sequence-to-sequence, models where the input is encoded into a fixed-length representation and is then decoded into the target sequence (Cho et al., 2014; Sutskever et al., 2014). In (Cho et al., 2014), a recurrent network was used as an encoder-decoder model for machine translation. While the presented results were positive, Bahdanau et al. argued that a fixed-length representation is a bottleneck during learning (Bahdanau et al., 2014). They proposed the use of an attention mechanism (Luong et al., 2015; Xu et al., 2015) to improve performance. Following this trend of attention-based sequence-to-sequence models, we now propose a model for end-to-end spoken language translation.

3 Method

Our goal is end-to-end spoken language translation. Given an input spectrogram of a sentence spoken in one language, our model outputs a spectrogram of the same sentence spoken in a different language. The backbone of our model is a sequence-to-sequence architecture with attention (Bahdanau et al., 2014; Vinyals et al., 2015). Figure 2 shows our model, which consists of a convolutional network to fuse temporal context, a pyramidal bidirectional encoder to capture acoustic features at multiple levels of temporal granularity, and a decoder with attention. The model is fully differentiable and can be trained with modern optimization methods.

3.1 Encoder

Convolutional Network. The first step of our model is to learn an appropriate representation of the spectrogram input. In its original form, the input consists of on the order of hundreds of timesteps. For longer sentences, this can easily expand to the order of thousands. Each timestep is associated with a feature vector, also containing hundreds of real values.

Modeling the full spectrogram would require unrolling the encoder RNN for an infeasibly large number of timesteps (Sainath et al., 2015). Even with truncated backpropagation through time (Haykin et al., 2001), this would be a challenging task on large datasets. Inspired by the Convolutional, Long Short-Term Memory Deep Neural Network (CLDNN) (Sainath et al., 2015) approach, we use a convolutional network with a learned filter bank to reduce the temporal length of the input. The stride, or hop size, controls the degree of length reduction.
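As an illustrative sketch (the filter sizes and stride below are assumptions, not the configuration used in the experiments), a strided convolution over the time axis reduces the number of timesteps by roughly the stride factor:

```python
import numpy as np

def strided_conv1d(x, filters, stride):
    """Strided time-axis convolution over a spectrogram.

    x: (T, D) spectrogram (T timesteps, D frequency bins)
    filters: (K, W, D) learned filter bank (K filters of width W)
    Returns (T_out, K) activations, where T_out is roughly T / stride.
    """
    K, W, D = filters.shape
    T = x.shape[0]
    n_out = 1 + (T - W) // stride
    out = np.empty((n_out, K))
    for t in range(n_out):
        window = x[t * stride : t * stride + W]   # (W, D) temporal context window
        out[t] = np.tensordot(filters, window, axes=([1, 2], [0, 1]))
    return out

# Illustrative sizes: 800 input frames, 128 bins, 32 filters, stride 4
x = np.random.randn(800, 128)
f = np.random.randn(32, 8, 128) * 0.01
h = strided_conv1d(x, f, stride=4)   # 800 frames reduced to 199
```

A stride of 4 shortens an 800-frame input to under 200 activations, which is what makes the subsequent recurrent encoder tractable.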

Pyramidal Bidirectional Recurrent Network. Many successful uses of RNNs have been on sentence-level outputs such as machine translation (Bahdanau et al., 2014) and image captioning (Karpathy and Fei-Fei, 2015), where the outputs are generally on the order of tens of timesteps. Given the filter activations from our convolutional network, the input is of a smaller dimensionality and can be fed into our encoder network; however, it can still have hundreds of timesteps. In Listen, Attend, and Spell (LAS) (Chan et al., 2016), it was shown that a multi-layer RNN struggles to extract appropriate representations from the input due to the large number of timesteps.

Inspired by the Clockwork RNN (Koutnik et al., 2014), we use a pyramidal RNN to address the issue of learning from a large number of timesteps (Chan et al., 2016). A pyramidal RNN is the same as a standard multi-layer RNN, but instead of each layer simply accepting the input from the previous layer, successively higher layers in the network only compute, or “tick,” during particular timesteps. This allows different layers of the RNN to operate at different temporal scales. Formally, let $h_i^j$ denote the hidden state of a bidirectional LSTM (BLSTM) at the $i$-th timestep of the $j$-th layer:

$h_i^j = \mathrm{BLSTM}(h_{i-1}^j, h_i^{j-1})$   (1)

For a pyramidal BLSTM (pBLSTM), the outputs from the lower layer, which contain high-resolution temporal information, are concatenated:

$h_i^j = \mathrm{pBLSTM}\big(h_{i-1}^j, [h_{2i}^{j-1}, h_{2i+1}^{j-1}]\big)$   (2)

In (2), the output of a pBLSTM unit is now a function of not only its previous hidden state, but also the outputs from two timesteps of the layer below. In LAS and our method, we reduce the time resolution by a factor of two for each layer.

Not only does the pyramidal RNN provide higher-level temporal features, but it also reduces the inference complexity. Only the first layer processes each input timestep as opposed to all layers.
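The time reduction between pBLSTM layers can be sketched as a reshape that concatenates consecutive pairs of frames (the BLSTM computation itself is omitted here for brevity; dimensions are illustrative):

```python
import numpy as np

def pyramid_reduce(h):
    """Concatenate consecutive frame pairs: (T, D) -> (T // 2, 2 * D).

    This is the time reduction applied between pBLSTM layers, as in
    Eq. (2); each output row stacks two adjacent lower-layer outputs.
    """
    T, D = h.shape
    if T % 2:              # drop a trailing frame if T is odd
        h = h[:-1]
    return h.reshape(-1, 2 * D)

h0 = np.random.randn(200, 64)   # outputs of the bottom BLSTM layer
h1 = pyramid_reduce(h0)         # (100, 128)
h2 = pyramid_reduce(h1)         # (50, 256): 4x fewer timesteps after two reductions
```

Stacking L such reductions shortens the sequence by a factor of 2^L, which is why only the first layer touches every input timestep.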

3.2 Decoder

Attention. Learning long-range temporal dependencies can be challenging (Bengio et al., 1994). To aid this process, we use an attention-based LSTM transducer (Chorowski et al., 2015). Specifically, we selected a tanh attention mechanism (Bahdanau et al., 2014).

At each timestep, the transducer produces a probability distribution over the next character conditioned on all the previously seen inputs. The distribution for $y_i$ is a function of the decoder state $s_i$ and context $c_i$. The decoder state $s_i$ is a function of the previous state $s_{i-1}$, the previously emitted character $y_{i-1}$, and context $c_{i-1}$. The context vector $c_i$ is produced by an attention mechanism (Chan et al., 2016). Specifically, we define:

$c_i = \sum_u \alpha_{i,u} h_u$   (3)

where the attention weight $\alpha_{i,u}$ is the alignment between the current decoder frame $i$ and a frame $u$ from the encoder input:

$\alpha_{i,u} = \dfrac{\exp(e_{i,u})}{\sum_{u'} \exp(e_{i,u'})}$   (4)

and where the score $e_{i,u}$ between the encoder hidden state $h_u$ and the previous state of the decoder cell $s_{i-1}$ is computed with:

$e_{i,u} = w^\top \tanh\big(\phi(s_{i-1}) + \psi(h_u)\big)$   (5)

for $u \in \{1, \dots, U\}$, where $\phi$ and $\psi$ are sub-networks, e.g., multi-layer perceptrons, with learnable parameters $w$, $\phi$, and $\psi$.
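One attention step can be sketched as follows; single linear maps stand in for the MLP sub-networks φ and ψ, and all dimensions are illustrative assumptions:

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # numerically stable softmax
    e = np.exp(z)
    return e / e.sum()

def attention_context(s_prev, H, w, W, V):
    """One step of additive (tanh) attention.

    s_prev: (d_dec,) previous decoder state
    H:      (U, d_enc) encoder outputs h_1..h_U
    w, W, V: learnable parameters; W s_prev and H V^T play the roles
             of phi(s) and psi(h) in Eq. (5).
    """
    e = np.tanh(W @ s_prev + H @ V.T) @ w   # scores e_{i,u}, shape (U,)
    alpha = softmax(e)                      # alignment alpha_{i,u}  (Eq. 4)
    c = alpha @ H                           # context c_i            (Eq. 3)
    return c, alpha

rng = np.random.default_rng(0)
U, d_enc, d_dec, d_att = 50, 128, 64, 32
H = rng.standard_normal((U, d_enc))
s = rng.standard_normal(d_dec)
c, alpha = attention_context(s, H,
                             rng.standard_normal(d_att),
                             rng.standard_normal((d_att, d_dec)),
                             rng.standard_normal((d_att, d_enc)))
```

The softmax guarantees the alignment weights form a distribution over encoder frames, so the context vector is a convex combination of encoder outputs.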

Multiple Output Prediction. The decoder faces timestep challenges similar to the input. Because we are predicting entire sentences, the number of output timesteps is on the order of hundreds to thousands. We follow a method proposed by Tacotron (Wang et al., 2017) to remedy this issue. Specifically, at each decoder timestep, we predict $r$ spectrogram slices. This reduces the number of output timesteps by a factor of $r$. Other benefits include a reduced number of parameters (see Table 1), faster training, and better spectrogram reconstruction performance.
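Unpacking the decoder's multi-frame outputs amounts to a reshape; the sketch below uses r = 3 (matching DMO-3 in Table 1) with illustrative dimensions:

```python
import numpy as np

def split_decoder_output(y, r, n_freq):
    """Each decoder step emits r spectrogram frames packed into one vector.

    y: (T_dec, r * n_freq) raw decoder outputs
    Returns (T_dec * r, n_freq), the unpacked spectrogram.
    """
    return y.reshape(-1, n_freq)

r, n_freq = 3, 128                      # r = 3 corresponds to DMO-3 in Table 1
y = np.random.randn(100, r * n_freq)    # 100 decoder steps -> 300 output frames
spec = split_decoder_output(y, r, n_freq)
```

With r = 3, the decoder RNN is unrolled for only a third of the output frames, which is where the parameter and training-time savings come from.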

Spectrogram to Waveform. Our model predicts spectrogram magnitudes only. To produce a waveform, we need both the magnitude and the phase components. Since our model does not predict phase, we use our predicted magnitude and apply a Griffin-Lim phase recovery (Nawab et al., 1983) to generate the final waveform of the translated sentence.
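A minimal Griffin-Lim phase-recovery loop can be sketched with numpy alone; the STFT parameters, window choice, and iteration count below are illustrative assumptions, not the report's settings:

```python
import numpy as np

def stft(x, n_fft=256, hop=128):
    w = np.hanning(n_fft)
    n_frames = 1 + (len(x) - n_fft) // hop
    frames = np.stack([x[i*hop:i*hop+n_fft] * w for i in range(n_frames)])
    return np.fft.rfft(frames, axis=1)            # (n_frames, n_fft//2 + 1)

def istft(S, n_fft=256, hop=128):
    w = np.hanning(n_fft)
    frames = np.fft.irfft(S, n=n_fft, axis=1)
    x = np.zeros(n_fft + hop * (len(S) - 1))
    wsum = np.zeros_like(x)
    for i, f in enumerate(frames):
        x[i*hop:i*hop+n_fft] += f * w             # weighted overlap-add
        wsum[i*hop:i*hop+n_fft] += w ** 2
    nz = wsum > 1e-8
    x[nz] /= wsum[nz]                             # window-sum normalization
    return x

def griffin_lim(mag, n_iter=30, n_fft=256, hop=128):
    """Iteratively estimate a phase consistent with the target magnitude."""
    rng = np.random.default_rng(0)
    phase = np.exp(2j * np.pi * rng.random(mag.shape))  # random initial phase
    for _ in range(n_iter):
        x = istft(mag * phase, n_fft, hop)
        S = stft(x, n_fft, hop)
        phase = S / np.maximum(np.abs(S), 1e-8)   # keep phase, discard magnitude
    return istft(mag * phase, n_fft, hop)

# Round-trip sanity check on a synthetic tone
t = np.arange(16000) / 16000.0
x = np.sin(2 * np.pi * 440 * t)
y = griffin_lim(np.abs(stft(x)))
```

Each iteration projects between the set of signals with the target magnitude and the set of consistent STFTs, so the recovered waveform's spectrogram converges toward the predicted magnitudes.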

3.3 Optimization

Due to the highly complex nature of our method, we assessed the performance of multiple loss objectives both in isolation and in parallel. Because the output of our model is a spectrogram, we use the standard L2 loss:

$\mathcal{L}_{2} = \sum_i \| y_i - \hat{y}_i \|_2^2$   (6)

where $y_i$ denotes the ground truth spectrogram for training example $i$ and $\hat{y}_i$ denotes the predicted spectrogram for example $i$.

Figure 3: Concatenative sentence generation. Recordings were collected of multiple people speaking individual words in different languages. Using a fixed vocabulary, we selected real-world text sentences from public text corpora. We then programmatically constructed full spoken sentences by concatenating the individual recorded words.

We also experimented with the Kullback-Leibler divergence as our optimization criterion:

$\mathcal{L}_{\mathrm{KL}} = D_{\mathrm{KL}}(P \,\|\, Q) = \sum_x P(x) \log \dfrac{P(x)}{Q(x)}$   (7)

where $Q$ and $P$ denote the predicted and ground truth output distributions, respectively.

In cases where the dimensionality is high, direct regression may be unable to learn efficiently (van den Oord et al., 2016). One solution is to quantize the output and treat the task as a classification problem (Oehler and Gray, 1995). Instead of predicting real-valued outputs corresponding to the output spectrogram, we quantize the ground truth label into $B$ bins and optimize the model with a classification objective:

$\mathcal{L}_{\mathrm{CE}} = -\sum_i \sum_{b=1}^{B} p_{i,b} \log q_{i,b}$   (8)

where $q$ and $p$ denote the predicted and ground truth (one-hot) output distributions, respectively. An added benefit is that we can push down the probability of incorrect bins while simultaneously increasing the probability of the correct output.
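The quantize-then-classify step can be sketched as follows; the bin count, value range, and label layout are illustrative assumptions:

```python
import numpy as np

def quantize(y, n_bins, lo=-1.0, hi=1.0):
    """Map real-valued spectrogram entries to integer bin labels in [0, n_bins)."""
    edges = np.linspace(lo, hi, n_bins + 1)[1:-1]   # interior bin edges
    return np.digitize(np.clip(y, lo, hi), edges)

def cross_entropy(logits, labels):
    """Mean cross-entropy between predicted logits and integer bin labels.

    logits: (N, n_bins) unnormalized scores; labels: (N,) integer bins.
    """
    z = logits - logits.max(axis=1, keepdims=True)  # stable log-softmax
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(labels)), labels].mean()

rng = np.random.default_rng(0)
y_true = rng.uniform(-1, 1, size=1000)    # ground-truth spectrogram values
labels = quantize(y_true, n_bins=16)      # discrete classification targets
logits = rng.standard_normal((1000, 16))  # stand-in model predictions
loss = cross_entropy(logits, labels)
```

Minimizing this loss pushes probability mass away from incorrect bins and onto the correct one, as described above.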

The KL-divergence loss, L2 loss, and cross-entropy loss were weighted according to mixing coefficients $\lambda_{\mathrm{KL}}$, $\lambda_{2}$, and $\lambda_{\mathrm{CE}}$.
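The combined objective is a weighted sum of the three losses; the coefficient values below are placeholders, since the report does not specify them:

```python
def combined_loss(l2, kl, ce, lam_l2=1.0, lam_kl=0.1, lam_ce=0.1):
    # Weighted mixture of the three objectives; the mixing coefficients
    # here are hypothetical, not the values used in the report.
    return lam_l2 * l2 + lam_kl * kl + lam_ce * ce

total = combined_loss(l2=0.5, kl=0.2, ce=2.0)
```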

4 Experiments

Our goal is end-to-end spoken language translation. In general, end-to-end models require large amounts of training data. Because we collected our own dataset, we needed an efficient way to gather a large number of training examples. This was done by concatenatively generating sentences from real-world recordings of individual words.

4.1 Pearl Dataset

We collect a new spoken language translation dataset, titled Pearl. The dataset consists of grammatically correct input and output sentences spoken by the same speaker. The input is spoken in a different language from the output. A total of twelve speakers (8 male, 4 female) contributed to the dataset. The languages include Hindi, Mandarin, English, and Spanish.

Vocabulary. First, a pre-determined vocabulary of 100 English words was constructed such that the vocabulary list contains words from multiple grammatical categories (e.g., noun, verb). Each speaker was instructed to speak one word at a time from the vocabulary list. This can be repeated multiple times per word to collect diverse pitch tracks and intonation.

Figure 4: Ablation Study of Model Components (ZH to EN). Various methods show an improvement in the reconstruction error on top of the base model. The above plot serves as a powerful debugging tool for analyzing each component of our model. The base model fluctuates during training and maintains the highest error.

Concatenative Generation. First, sentences were constructed with a concatenative model using real-world sentences from the Facebook bAbI project (Li et al., 2016). We constructed single-word inputs, bigrams, trigrams, and sentences consisting of five to ten words. Additionally, we extracted single words and single phones from the TIMIT dataset (Garofolo et al., 1993). These words were similarly concatenated to construct English sentences.

Second, the 100-word English vocabulary list was translated into the other languages using the Google Translate API, and the translations were manually verified by our native speakers. Figure 3 shows an overview of this process.
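The concatenative generation step can be sketched as joining recorded word clips with short silences; the sample rate and gap length below are illustrative assumptions:

```python
import numpy as np

def concat_sentence(word_clips, sr=16000, gap_ms=120):
    """Concatenate recorded single-word clips into one spoken sentence.

    word_clips: list of 1-D float arrays (one recorded word each).
    A short silence is inserted between words; the gap length is illustrative.
    """
    gap = np.zeros(int(sr * gap_ms / 1000))
    parts = []
    for i, clip in enumerate(word_clips):
        parts.append(clip)
        if i < len(word_clips) - 1:
            parts.append(gap)
    return np.concatenate(parts)

# Three word recordings of varying length form a trigram utterance
words = [np.random.randn(8000), np.random.randn(6400), np.random.randn(7200)]
sentence = concat_sentence(words)
```

Because the word order comes from real text corpora, the resulting utterances are grammatically correct while remaining cheap to produce at scale.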

4.2 Comparison with State-of-the-Art

We performed ablation studies to assess the effect of the various components of our model when equipped with state-of-the-art modules. Starting with a base encoder-decoder model, we introduce a single method at a time. The results of each model are shown in Table 1. Because we individually evaluated the performance of each state-of-the-art method in isolation, we can better understand our model’s performance improvements. We show the changes in parameter count as well as the model’s spectrogram reconstruction error (mean-squared error) across the different methods.

Method # Parameters Error
Base Model
KL Loss
Pyramid
Attention
Bidirectional
CNN
DMO-3
Table 1: Comparison of state-of-the-art methods. Each row denotes a base model with only the method applied (i.e., one model component). KL loss was applied to the base model. DMO-3 denotes decoder multi-output predicting 3 timesteps at a time. Reconstruction error is used as error.

Different state-of-the-art methods provide performance benefits to varying degrees. For example, DMO-3 exhibited the largest improvement: not only does the DMO-3 model have fewer parameters, it also trains faster. The convolutional network in the encoder provides additional benefits, likely because the learned convolutional kernels better capture temporal context and reduce the number of timesteps.

Figure 5: Results on an unseen speaker for bigrams and trigrams. (Top row) Input English spectrogram and sentence. (Middle row) Predicted Mandarin spectrogram. (Bottom row) Ground truth Mandarin spectrogram and sentence.

4.3 Qualitative Results

Figure 5 shows the input, predicted, and ground truth spectrograms for both bigram and trigram inputs for our full model equipped with all components in Table 1. The input spectrogram comes from a speaker not present in the training set. Figure 5 represents the toughest experiment for our model, but the results are positive. Our model is able to successfully generate rib-like patterns in the spectrogram. Even in the trigram case, we can see three distinct words, delimited by silence.

4.4 Learned Word Embeddings

Internally, we trained our model on a dataset consisting of bigrams before moving to larger trigram-based models. The input consisted of Spanish (ES) bigrams and the output consisted of English (EN) bigrams. The English vocabulary consisted of 70 words. We took all words in our input Spanish vocabulary and analyzed the convolutional activations in Figure 6. Each point represents a different instance of a word.

Figure 6: Learned word-level embeddings. Each dot is the encoder output state’s t-SNE 2D embedding. Colors denote different Spanish words.

It is clear that our model can learn representations of individual Spanish words despite being trained on bigram inputs in Spanish with output labels in English.

5 Conclusion

In computer vision, the community first made significant deep learning advances on ImageNet (Russakovsky et al., 2015) by predicting single-label classes (Krizhevsky et al., 2012). The community progressively moved to sequence outputs such as image captioning (Xu et al., 2015; Donahue et al., 2015; Karpathy and Fei-Fei, 2015). Eventually, end-to-end models began to produce entire paragraphs of text describing a single image (Krause et al., 2017).

Similar to the story in computer vision, in this work, we presented a method for end-to-end spoken language translation on short phrases consisting of a few words. Using a newly collected dataset of multiple speakers in multiple languages, our method is able to learn acoustic and language features while being able to generalize to unseen speakers. We hope the speech and signal processing community will build on our work, moving to larger and more complex models for even longer sentences and full paragraphs.

References

  • Arik et al. (2017) Sercan O Arik, Mike Chrzanowski, Adam Coates, Gregory Diamos, Andrew Gibiansky, Yongguo Kang, Xian Li, John Miller, Jonathan Raiman, Shubho Sengupta, et al. 2017. Deep voice: Real-time neural text-to-speech. arXiv .
  • Arslan and Hansen (1996) Levent M Arslan and John HL Hansen. 1996. Language accent classification in american english. Speech Communication .
  • Bahdanau et al. (2014) Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv .
  • Bahl et al. (1986) Lalit Bahl, Peter Brown, Peter De Souza, and Robert Mercer. 1986. Maximum mutual information estimation of hidden markov model parameters for speech recognition. In International Conference on Acoustics, Speech, and Signal Processing (ICASSP).
  • Bengio et al. (1994) Yoshua Bengio, Patrice Simard, and Paolo Frasconi. 1994. Learning long-term dependencies with gradient descent is difficult. Transactions on Neural Networks .
  • Bourlard and Morgan (1990) Herve Bourlard and Nelson Morgan. 1990. A continuous speech recognition system embedding mlp into hmm. In Neural Information Processing Systems (NIPS).
  • Chan et al. (2016) William Chan, Navdeep Jaitly, Quoc Le, and Oriol Vinyals. 2016. Listen, attend and spell: A neural network for large vocabulary conversational speech recognition. In Interspeech.
  • Cho et al. (2014) Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv .
  • Chorowski et al. (2015) Jan K Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, Kyunghyun Cho, and Yoshua Bengio. 2015. Attention-based models for speech recognition. In Neural Information Processing Systems (NIPS).
  • Donahue et al. (2015) Jeffrey Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, and Trevor Darrell. 2015. Long-term recurrent convolutional networks for visual recognition and description. In Computer Vision and Pattern Recognition (CVPR).
  • Embick et al. (2000) David Embick, Alec Marantz, Yasushi Miyashita, Wayne O’Neil, and Kuniyoshi L Sakai. 2000. A syntactic specialization for broca’s area. Proceedings of the National Academy of Sciences .
  • Garofolo et al. (1993) John S Garofolo, Lori F Lamel, William M Fisher, Jonathon G Fiscus, and David S Pallett. 1993. Darpa timit acoustic-phonetic continous speech corpus cd-rom. nist speech disc 1-1.1. NASA STI/Recon Technical Report .
  • Geschwind and Levitsky (1968) Norman Geschwind and Walter Levitsky. 1968. Human brain: left-right asymmetries in temporal speech region. Science .
  • Graves et al. (2006) Alex Graves, Santiago Fernández, Faustino Gomez, and Jürgen Schmidhuber. 2006. Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks. In International Conference on Machine Learning (ICML).
  • Graves and Jaitly (2014) Alex Graves and Navdeep Jaitly. 2014. Towards end-to-end speech recognition with recurrent neural networks. In International Conference on Machine Learning (ICML).
  • Haykin et al. (2001) Simon S Haykin et al. 2001. Kalman filtering and neural networks. Wiley Online Library.
  • Hickok and Poeppel (2007) Gregory Hickok and David Poeppel. 2007. The cortical organization of speech processing. Nature Reviews Neuroscience .
  • Karpathy and Fei-Fei (2015) Andrej Karpathy and Li Fei-Fei. 2015. Deep visual-semantic alignments for generating image descriptions. In Computer Vision and Pattern Recognition (CVPR).
  • Konig and Morgan (1992) Yochai Konig and Nelson Morgan. 1992. Gdnn: A gender-dependent neural network for continuous speech recognition. In Neural Networks. International Joint Conference on.
  • Koutnik et al. (2014) Jan Koutnik, Klaus Greff, Faustino Gomez, and Juergen Schmidhuber. 2014. A clockwork rnn. arXiv .
  • Krause et al. (2017) Jonathan Krause, Justin Johnson, Ranjay Krishna, and Li Fei-Fei. 2017. A hierarchical approach for generating descriptive image paragraphs. Computer Vision and Pattern Recognition (CVPR) .
  • Krizhevsky et al. (2012) Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. 2012. Imagenet classification with deep convolutional neural networks. In Neural Information Processing Systems (NIPS).
  • Levinson (1986) Stephen E Levinson. 1986. Continuously variable duration hidden markov models for automatic speech recognition. Computer Speech & Language .
  • Li et al. (2016) Jiwei Li, Alexander H Miller, Sumit Chopra, Marc’Aurelio Ranzato, and Jason Weston. 2016. Dialogue learning with human-in-the-loop. arXiv .
  • Logan et al. (2000) Beth Logan et al. 2000. Mel frequency cepstral coefficients for music modeling. In ISMIR.
  • Luong et al. (2015) Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attention-based neural machine translation. arXiv .
  • Mikolov et al. (2013) Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv .
  • Nawab et al. (1983) S Nawab, T Quatieri, and Jae Lim. 1983. Signal reconstruction from short-time fourier transform magnitude. Transactions on Acoustics, Speech, and Signal Processing .
  • Newman et al. (2012) Aaron J Newman, Antoine Tremblay, Emily S Nichols, Helen J Neville, and Michael T Ullman. 2012. The influence of language proficiency on lexical semantic processing in native and late learners of english. Journal of Cognitive Neuroscience .
  • Nwe et al. (2003) Tin Lay Nwe, Say Wei Foo, and Liyanage C De Silva. 2003. Speech emotion recognition using hidden markov models. Speech Communication .
  • Oehler and Gray (1995) Karen L. Oehler and Robert M. Gray. 1995. Combining image compression and classification using vector quantization. Transactions on pattern Analysis and Machine Intelligence (T-PAMI) .
  • Pennington et al. (2014) Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In Conference on Empirical Methods in Natural Language Processing (EMNLP).
  • Russakovsky et al. (2015) Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. 2015. Imagenet large scale visual recognition challenge. International Journal of Computer Vision (IJCV) .
  • Sainath et al. (2015) Tara N Sainath, Ron J Weiss, Andrew Senior, Kevin W Wilson, and Oriol Vinyals. 2015. Learning the speech front-end with raw waveform cldnns. In Interspeech.
  • Schuller et al. (2003) Björn Schuller, Gerhard Rigoll, and Manfred Lang. 2003. Hidden markov model-based speech emotion recognition. In International Conference on Multimedia and Expo.
  • Sotelo et al. (2017) Jose Sotelo, Soroush Mehri, Kundan Kumar, Joao Felipe Santos, Kyle Kastner, Aaron Courville, and Yoshua Bengio. 2017. Char2wav: End-to-end speech synthesis. arXiv .
  • Sutskever et al. (2014) Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Neural Information Processing Systems (NIPS).
  • van den Oord et al. (2016) Aäron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. 2016. Wavenet: A generative model for raw audio. arXiv .
  • Venugopalan et al. (2014) Subhashini Venugopalan, Huijuan Xu, Jeff Donahue, Marcus Rohrbach, Raymond Mooney, and Kate Saenko. 2014. Translating videos to natural language using deep recurrent neural networks. arXiv .
  • Vinyals et al. (2015) Oriol Vinyals, Łukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey Hinton. 2015. Grammar as a foreign language. In Neural Information Processing Systems (NIPS).
  • Wang et al. (2016) Wenfu Wang, Shuang Xu, and Bo Xu. 2016. First step towards end-to-end parametric tts synthesis: Generating spectral parameters with neural attention. In Interspeech.
  • Wang et al. (2017) Yuxuan Wang, RJ Skerry-Ryan, Daisy Stanton, Yonghui Wu, Ron J Weiss, Navdeep Jaitly, Zongheng Yang, Ying Xiao, Zhifeng Chen, Samy Bengio, et al. 2017. Tacotron: Towards end-to-end speech synthesis. Interspeech .
  • Wilpon et al. (1990) Jay G Wilpon, Lawrence R Rabiner, C-H Lee, and ER Goldman. 1990. Automatic recognition of keywords in unconstrained speech using hidden markov models. Interspeech .
  • Wilson et al. (2005) Theresa Wilson, Janyce Wiebe, and Paul Hoffmann. 2005. Recognizing contextual polarity in phrase-level sentiment analysis. In Proceedings of the conference on human language technology and empirical methods in natural language processing. Association for Computational Linguistics.
  • Xu et al. (2015) Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In International Conference on Machine Learning (ICML).
  • Zou et al. (2013) Will Y Zou, Richard Socher, Daniel M Cer, and Christopher D Manning. 2013. Bilingual word embeddings for phrase-based machine translation. In Conference on Empirical Methods in Natural Language Processing (EMNLP).