Vid2speech: Speech Reconstruction From Silent Video
Speechreading is a notoriously difficult task for humans to perform. In this paper we present an end-to-end model based on a convolutional neural network (CNN) for generating an intelligible acoustic speech signal from silent video frames of a speaking person. The proposed CNN generates sound features for each frame based on its neighboring frames. Waveforms are then synthesized from the learned speech features to produce intelligible speech. We show that by leveraging the automatic feature learning capabilities of a CNN, we can obtain state-of-the-art word intelligibility on the GRID dataset, and show promising results for learning out-of-vocabulary (OOV) words.
Ariel Ephrat and Shmuel Peleg
The Hebrew University of Jerusalem
This research was supported by the Israel Science Foundation, by DFG, and by Intel ICRI-CI.
Index Terms— Speechreading, visual speech processing, articulatory-to-acoustic mapping, speech intelligibility, neural networks
1 Introduction
Speechreading is the task of obtaining reliable phonetic information from a speaker's face during speech perception. It has been described as "trying to grasp with one sense information meant for another". Since several phonemes (phonetic units of speech) often correspond to a single viseme (visual unit of speech), it is a notoriously difficult task for humans to perform.
Several applications come to mind for automatic video-to-speech systems: Enabling videoconferencing from within a noisy environment; facilitating conversation at a party with loud music between people having wearable cameras and earpieces; maybe even using surveillance video as a long-range listening device.
Much work has been done in the area of automating speechreading by computers [1, 2, 3]. There are two main approaches to this task. The first, and the one most widely attempted in the past, models speechreading as a classification problem. In this approach, the input video is manually segmented into short clips which contain either whole words from a predefined dictionary, or parts of words comprising phonemes or visemes. Visual features are then extracted from the frames and fed to a classifier. Wand et al., Assael et al., and Chung et al. have all recently shown state-of-the-art word and sentence-level classification results using neural network-based models.
The second approach, and the one used in this work, is to model speechreading as an articulatory-to-acoustic mapping problem in which the "label" of each short video segment is a corresponding feature vector representing the audio signal. Kello and Plaut and Hueber and Bailly attempted this approach using various sensors to record mouth movements. Le Cornu and Milner took this direction in a recent work, using hand-crafted visual features to produce intelligible audio.
A major advantage of this model of learning is that it does not depend on a particular segmentation of the input data into words or sub-words. Nor does it need explicit manually-annotated labels; instead, it uses "natural supervision", in which the prediction target is derived from a natural signal in the world. A regression-based model is also vocabulary-agnostic: given a training set with a large enough representation of the phonemes/visemes of a particular language, it can reconstruct words that are not present in the training set. Classification at the sub-word level can also have the same effect. A further advantage of this model is its ability to reconstruct the non-textual parts of human speech, e.g. emotion, prosody, etc.
Researchers have spent much time and effort finding visual features which accurately map facial movements to auditory signal. We bypass the need for feature crafting by utilizing CNNs, which have brought significant advances to computer vision in recent years. Given raw visual data as input, our network automatically learns optimal visual features for reconstructing an acoustic signal closest to the original.
In this paper, we: (1) Present an end-to-end CNN-based model that predicts the speech audio signal of a silent video of a person speaking, significantly improving state-of-the-art reconstructed speech intelligibility; (2) demonstrate that allowing the model to learn from the speaker’s entire face instead of only the mouth region greatly improves performance; (3) show that modeling speechreading as a regression problem allows us to reconstruct out-of-vocabulary words.
2 Speech representation
Finding a representation of the acoustic speech signal that can both be estimated by a neural network and synthesized back into intelligible audio is not trivial. Spectrogram magnitude, for example, can be used as network output; however, the quality of its resynthesis into speech is usually poor, as it does not contain phase information. Use of the raw waveform as network output was ruled out for lack of a suitable loss function with which to train the network.
Linear Predictive Coding (LPC) is a powerful and widely used technique for representing the spectral envelope of a digital speech signal, which assumes a source-filter model of speech production. LPC analysis is applied to overlapping audio frames of the original speech signal, resulting in an LPC coefficient vector whose order P can be tuned. Line Spectrum Pairs (LSP) are a representation of LPC coefficients which is more stable and robust to quantization and small coefficient deviations. LSPs are therefore useful for speech coding and transmission over a channel, and indeed proved to be well suited to the task at hand.
We apply the following procedure to calculate audio features suitable for use as neural network output. First, the audio from each video sequence is downsampled and split into overlapping audio frames. LPC analysis is applied to each audio frame, as done by Le Cornu and Milner, followed by LSP decomposition, resulting in one feature vector per audio frame. While the LPC order we use is relatively low for high-fidelity modeling of the speech spectrum, we chose it in order to isolate the effect of using CNN-learned visual features versus the hand-crafted features of Le Cornu and Milner. Each video frame has two successive corresponding feature vectors, which are concatenated to form a single sound vector. See Figure 1 for an illustration of this procedure. Finally, the vectors are standardized element-wise by subtracting the mean and dividing by the standard deviation of each element.
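The LPC-to-LSP step can be sketched numerically. The following is a minimal illustration of converting one frame's LPC coefficients into line spectral frequencies via the roots of the symmetric and antisymmetric polynomials; it is not the paper's implementation (which relied on pysptk), and the example coefficients are arbitrary:

```python
import numpy as np

def lpc_to_lsf(a):
    """Convert LPC coefficients a = [1, a1, ..., ap] into line spectral
    frequencies: the root angles in (0, pi) of the symmetric polynomial
    P(z) = A(z) + z^-(p+1) A(1/z) and the antisymmetric polynomial
    Q(z) = A(z) - z^-(p+1) A(1/z)."""
    a = np.asarray(a, dtype=float)
    sym = np.concatenate([a, [0.0]]) + np.concatenate([[0.0], a[::-1]])
    antisym = np.concatenate([a, [0.0]]) - np.concatenate([[0.0], a[::-1]])
    lsf = []
    for poly in (sym, antisym):
        # for a stable A(z) the roots lie on the unit circle; keep one
        # angle per conjugate pair and drop the trivial roots at z = +/-1
        angles = np.angle(np.roots(poly))
        lsf.extend(t for t in angles if 1e-8 < t < np.pi - 1e-8)
    return np.sort(np.array(lsf))

# Toy stable filter A(z) = 1 - 0.9 z^-1 + 0.2 z^-2; its two LSFs are
# arccos(0.85) and arccos(0.05), from the quadratic factors of P and Q.
lsf = lpc_to_lsf([1.0, -0.9, 0.2])
```

In the paper's pipeline, one such LSP vector is computed per audio frame and then standardized element-wise before being used as a regression target.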
3 Predicting speech
3.1 Regressing sound features
Given a sequence of input video frames, we would like to estimate the corresponding sequence of sound feature vectors, with one sound vector per video frame.
Our goal is to reconstruct a single audio representation vector corresponding to the duration of a single video frame. However, instantaneous lip movements, such as those in isolated video frames, can be significantly disambiguated by using a temporal neighborhood as context. Therefore, the input to our network is a clip of consecutive grayscale video frames, from which the speaker's face is cropped and scaled to a fixed size. The resulting input volume is normalized by dividing by the maximum pixel intensity and subtracting the mean.
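As a sketch, constructing one such input volume might look as follows; the window length, crop box, and target size here are illustrative placeholders, not the paper's values:

```python
import numpy as np

def make_input_volume(frames, center, k=5, face_box=(50, 250, 80, 280), size=128):
    """Build one network input: k consecutive grayscale face crops around
    frame `center`, resized and normalized. `frames` has shape (T, H, W)
    with uint8 pixels; `face_box` = (top, bottom, left, right)."""
    top, bottom, left, right = face_box
    half = k // 2
    window = frames[center - half:center + half + 1, top:bottom, left:right]
    # crude nearest-neighbour resize of every crop to (size, size)
    h, w = window.shape[1:]
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    vol = window[:, rows][:, :, cols].astype(np.float32)
    vol /= 255.0          # divide by maximum pixel intensity
    vol -= vol.mean()     # subtract the mean
    return np.moveaxis(vol, 0, -1)  # shape (size, size, k)

# synthetic stand-in for a GRID video (75 frames of grayscale pixels)
frames = np.random.default_rng(0).integers(0, 256, (75, 288, 360), dtype=np.uint8)
volume = make_input_volume(frames, center=10)
```

A real pipeline would use the detected face bounding box per video rather than a fixed crop, as described in the implementation details.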
Figure 2 illustrates the importance of allowing the network to learn visual features from the entire face, as opposed to only the mouth region, as was widely done in the past. The two lines in the graph show final network test error as a function of the number of frames in the input clip, while the output always remained the sound features of the center frame. Not surprisingly, the largest gain in performance for both face and mouth regions comes when the clip is extended beyond a single frame, highlighting the importance of temporal context. The advantage of learning features from the full face is also evident, with the best face-region error lower than the best mouth-region error. We hypothesize that this is a result of our CNN using the increased amount of visual information to disambiguate similar mouth movements.
Sound prediction model
We use a convolutional neural network (CNN) that takes the aforementioned video clip as input. Our network uses VGG-like stacks of small receptive fields in its convolutional layers. The architecture comprises five consecutive convolutional blocks, followed by two fully connected layers. The size of the last layer of our CNN corresponds to the size of the sound representation vectors we wish to predict. The network is trained with backpropagation using a mean squared error (MSE) loss.
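A sketch of such a network in Keras (the library named in the implementation details) might look like the following; the clip length, crop size, filter counts, dense widths, and dropout rate are all assumptions for illustration, since the paper's exact values are not reproduced here:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

K_FRAMES, FACE_SIZE, OUT_DIM = 9, 128, 18  # assumed, not the paper's values

def build_model():
    model = keras.Sequential([keras.Input(shape=(FACE_SIZE, FACE_SIZE, K_FRAMES))])
    # five VGG-like blocks of small (3x3) receptive fields
    for filters in (32, 64, 128, 256, 512):  # placeholder filter counts
        model.add(layers.Conv2D(filters, 3, padding="same"))
        model.add(layers.LeakyReLU())
        model.add(layers.MaxPooling2D(2))
    model.add(layers.Flatten())
    model.add(layers.Dense(512))
    model.add(layers.LeakyReLU())
    model.add(layers.Dropout(0.5))
    # tanh in the last two layers, as described in the implementation details
    model.add(layers.Dense(512, activation="tanh"))
    model.add(layers.Dense(OUT_DIM, activation="tanh"))
    model.compile(optimizer=keras.optimizers.Adam(1e-4), loss="mse")
    return model

model = build_model()
pred = model.predict(np.zeros((1, FACE_SIZE, FACE_SIZE, K_FRAMES), np.float32), verbose=0)
```

The output dimension matches the per-frame sound vector, so training pairs each input clip with the standardized LSP-based features of its center frame.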
3.2 Generating a waveform
Source-filter speech synthesizers use filter parameters together with an excitation signal to construct an acoustic signal from LPC features. Predicting excitation parameters is out of the scope of this work, and we therefore use Gaussian white noise as the excitation signal. This produces an unvoiced speech signal and results in unnatural sounding speech. Although this method of generating a waveform is relatively simplistic, we found that it worked quite well for speech intelligibility purposes, which is the focus of our work.
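A minimal sketch of this unvoiced resynthesis is given below, assuming per-frame LPC coefficients and gains have already been recovered from the predicted LSPs; frame overlap handling and the paper's actual SPTK-based implementation are omitted:

```python
import numpy as np
from scipy.signal import lfilter

def synthesize_unvoiced(lpc_frames, gains, frame_len, seed=0):
    """Source-filter synthesis with an unvoiced source: Gaussian white
    noise is filtered through the all-pole filter 1/A(z) of each frame,
    scaled by the frame gain, and the frames are concatenated."""
    rng = np.random.default_rng(seed)
    out = []
    for a, g in zip(lpc_frames, gains):
        excitation = rng.standard_normal(frame_len)   # white-noise source
        out.append(g * lfilter([1.0], a, excitation))  # all-pole filter
    return np.concatenate(out)

# two toy frames sharing one stable filter A(z) = 1 - 0.9 z^-1 + 0.2 z^-2
a = np.array([1.0, -0.9, 0.2])
signal = synthesize_unvoiced([a, a], gains=[0.1, 0.2], frame_len=160)
```

Replacing the noise with a mixed voiced/unvoiced excitation is what a full source-filter synthesizer would add, which is exactly the part declared out of scope above.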
4 Experiments
We applied our speech-reconstruction model to several tasks, and evaluated it with a human listening study. Examples of reconstructed speech are available online.
Our network implementation is based on the Keras library built on top of TensorFlow. Network weights are initialized using the initialization procedure suggested by He et al. We use Leaky ReLU as the non-linear activation function in all layers but the last two, in which we use the hyperbolic tangent (tanh) function. The Adam optimizer is used for training. Dropout is used after both convolutional and fully connected layers to prevent overfitting. We train with mini-batches and stop training when the validation loss stops decreasing (around 80 epochs). Training is done using a single Nvidia Titan Black GPU. We use a cascade-based face detector from OpenCV, and crop out the mouth region for the comparison in Figure 2 by using a hard-coded mask. For LPC analysis/resynthesis, as well as excitation generation, we used pysptk, a Python wrapper for the Speech Signal Processing Toolkit (SPTK).
4.1 GRID corpus
We performed our experiments on the GRID audiovisual sentence corpus, a large dataset of audio and video (facial) recordings of sentences spoken by 34 talkers (18 male, 16 female). Each sentence consists of a six-word sequence of the form shown in Table 1, e.g. "Place green at H now".
A total of 51 different words are contained in the GRID corpus. Videos have a fixed duration of 3 seconds at a frame rate of 25 FPS, resulting in sequences comprising 75 frames. These videos are preprocessed as described in Section 3.1 before being fed into the network. The acoustic part of the GRID corpus is used as described in Section 2.
In order to accurately compare our results with those of Le Cornu and Milner, we performed our experiments on the videos of speaker four (S4, female), as done there. The training/testing split for each experiment is described in the following sections.
4.2 Sound prediction tasks
Reconstruction from full dataset
The first task, proposed by Le Cornu and Milner, is designed to examine whether reconstructing audio from visual features can produce intelligible speech. For this task we trained our model on a random train/test split of the videos of S4, making sure that all GRID words were represented in each set. The resulting representation vectors were converted back into waveforms using unvoiced excitation, and two different multimedia configurations were constructed: the predicted audio alone, and the original video combined with the reconstructed audio.
Reconstructing out-of-vocabulary words
As noted earlier, regression-based models can be used to reconstruct out-of-vocabulary (OOV) words. To test this, we performed the following experiment: the videos in our dataset were sorted according to the digit uttered in each sentence, and our network was trained and tested on five different train/test splits, each with two distinct digits left out of the training set. For example, the network would be trained on all sequences in which eight of the ten digits are uttered, and tested only on sequences containing the two held-out digits.
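The split construction can be sketched as follows; the particular digit pairings produced here are illustrative, not the ones used in the experiment:

```python
import random

def make_oov_splits(seed=0):
    """Partition the ten GRID digits into five disjoint held-out pairs.
    Split i trains on sentences containing the other eight digits and
    tests on sentences containing pair i (illustrative pairing only)."""
    digits = list(range(10))
    random.Random(seed).shuffle(digits)
    return [set(digits[i:i + 2]) for i in range(0, 10, 2)]

splits = make_oov_splits()
```

Because the five held-out pairs are disjoint and cover all ten digits, every digit is tested as an OOV word exactly once across the five splits.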
4.3 Evaluating the speech predictions
We assessed the intelligibility of the reconstructed speech with a human listening study conducted on Amazon Mechanical Turk (MTurk). Each job consisted of transcribing one of three types of 3-second clips: audio-only, audio-visual, and OOV audio-visual. The listeners were unaware of the differences between the clips. For each clip, they were given the GRID vocabulary and tasked with classifying each reconstructed word as one of its possible options. Altogether, the transcribed videos covered many distinct sequences and were transcribed by multiple MTurk workers, making the study comparable in scale to the listening study of Le Cornu and Milner.
Table 2 shows the results of our first task, reconstruction from the full dataset, along with a comparison to Le Cornu and Milner. Our reconstructed audio is significantly more intelligible than their best results, as shown by both the audio-only and audio-visual tests. The final column shows the result of retraining and testing our model on another speaker from the GRID corpus, speaker two (S2, male), whose speech clarity is reported to be comparable to that of speaker four. We used the same listening test methodology described above, this time using only the combined audio and video. Examples of original vs. reconstructed LSP coefficients, waveforms and spectrograms for this task can be seen in Figure 3.
Results for the OOV task, which appear in Table 3, were obtained by averaging the digit annotation accuracies of the five train/test splits. The fact that human subjects were over five times more likely than chance to choose the correct digit after listening to the reconstructed audio shows that using regression to solve the OOV problem is a promising direction. Moreover, a larger and more diversified training vocabulary would likely increase OOV reconstruction intelligibility significantly.
5 Concluding remarks
This work has demonstrated the feasibility of reconstructing an intelligible audio speech signal from silent video frames by modeling automatic speechreading as a regression problem and using a CNN to automatically learn relevant visual features. OOV word reconstruction was also shown to hold promise.
The work described in this paper can serve as a basis for several directions of further research. These include using a less constrained video dataset to show real-world reconstruction viability and generalizing to speaker-independent and multiple speaker reconstruction.
6 References
-  Eric David Petajan, Automatic lipreading to enhance speech recognition (speech reading), Ph.D. thesis, University of Illinois at Urbana-Champaign, 1984.
-  Iain Matthews, Timothy F Cootes, J Andrew Bangham, Stephen Cox, and Richard Harvey, “Extraction of visual features for lipreading,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 2, pp. 198–213, 2002.
-  Ziheng Zhou, Guoying Zhao, Xiaopeng Hong, and Matti Pietikäinen, “A review of recent advances in visual speech decoding,” Image and vision computing, vol. 32, no. 9, pp. 590–605, 2014.
-  Helen L Bear and Richard Harvey, “Decoding visemes: Improving machine lip-reading,” in ICASSP’16, 2016, pp. 2009–2013.
-  Michael Wand, Jan Koutník, et al., “Lipreading with long short-term memory,” in ICASSP’16, 2016, pp. 6115–6119.
-  Yannis M Assael, Brendan Shillingford, Shimon Whiteson, and Nando de Freitas, “Lipnet: End-to-end sentence-level lipreading,” arXiv preprint arXiv:1611.01599, 2016.
-  Joon Son Chung, Andrew Senior, Oriol Vinyals, and Andrew Zisserman, “Lip reading sentences in the wild,” arXiv preprint arXiv:1611.05358, 2016.
-  Christopher T Kello and David C Plaut, “A neural network model of the articulatory-acoustic forward mapping trained on recordings of articulatory parameters,” The Journal of the Acoustical Society of America, vol. 116, no. 4, pp. 2354–2364, 2004.
-  Thomas Hueber and Gérard Bailly, “Statistical conversion of silent articulation into audible speech using full-covariance hmm,” Comput. Speech Lang., vol. 36, no. C, pp. 274–293, Mar. 2016.
-  Thomas Le Cornu and Ben Milner, “Reconstructing intelligible audio speech from visual speech features,” in Sixteenth Annual Conference of the International Speech Communication Association, 2015.
-  Andrew Owens, Phillip Isola, Josh McDermott, Antonio Torralba, Edward H Adelson, and William T Freeman, “Visually indicated sounds,” in CVPR’16, 2016.
-  Gunnar Fant, Acoustic theory of speech production: with calculations based on X-ray studies of Russian articulations, vol. 2, Walter de Gruyter, 1971.
-  Fumitada Itakura, “Line spectrum representation of linear predictor coefficients of speech signals,” The Journal of the Acoustical Society of America, vol. 57, no. S1, pp. S35–S35, 1975.
-  Karen Simonyan and Andrew Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv:1409.1556, 2014.
-  Dennis H Klatt and Laura C Klatt, “Analysis, synthesis, and perception of voice quality variations among female and male talkers,” the Journal of the Acoustical Society of America, vol. 87, no. 2, pp. 820–857, 1990.
-  François Chollet, “Keras,” https://github.com/fchollet/keras, 2015.
-  “TensorFlow,” software available from http://tensorflow.org/.
-  Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun, “Delving deep into rectifiers: Surpassing human-level performance on imagenet classification,” in ICCV’15, 2015.
-  Andrew L Maas, Awni Y Hannun, and Andrew Y Ng, “Rectifier nonlinearities improve neural network acoustic models,” in ICML’13, 2013.
-  Diederik Kingma and Jimmy Ba, “Adam: A method for stochastic optimization,” arXiv:1412.6980, 2014.
-  Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov, “Dropout: a simple way to prevent neural networks from overfitting.,” Journal of Machine Learning Research, vol. 15, no. 1, pp. 1929–1958, 2014.
-  G. Bradski, “OpenCV,” Dr. Dobb’s Journal of Software Tools, 2000.
-  “Speech signal processing toolkit,” Available from http://sp-tk.sourceforge.net/readme.php.
-  Martin Cooke, Jon Barker, Stuart Cunningham, and Xu Shao, “An audio-visual corpus for speech perception and automatic speech recognition,” The Journal of the Acoustical Society of America, vol. 120, no. 5, pp. 2421–2424, 2006.