VoiceFilter: Targeted Voice Separation by Speaker-Conditioned Spectrogram Masking


In this paper, we present a novel system that separates the voice of a target speaker from multi-speaker signals, by making use of a reference signal from the target speaker. We achieve this by training two separate neural networks: (1) A speaker recognition network that produces speaker-discriminative embeddings; (2) A spectrogram masking network that takes both noisy spectrogram and speaker embedding as input, and produces a mask. Our system significantly reduces the speech recognition WER on multi-speaker signals, with minimal WER degradation on single-speaker signals.


Quan Wang* 1  Hannah Muckenhirn* 2,3  Kevin Wilson1  Prashant Sridhar1
Zelin Wu1  John Hershey1  Rif A. Saurous1  Ron J. Weiss1  Ye Jia1  Ignacio Lopez Moreno1
* Equal contribution. Hannah performed this work as an intern at Google.

1Google Inc., USA   2Idiap Research Institute, Switzerland   3EPFL, Switzerland

quanw@google.com   hannah.muckenhirn@idiap.ch

Index Terms: Source separation, speaker recognition, spectrogram masking, speech recognition

1 Introduction

Recent advances in speech recognition have led to performance improvement in challenging scenarios such as noisy and far-field conditions. However, speech recognition systems still perform poorly when the speaker of interest is recorded in crowded environments, i.e., with interfering speakers in the foreground or background.

One way to deal with this issue is to first apply a speech separation system to the noisy audio in order to separate the voices of the different speakers. Therefore, if the noisy signal contains N speakers, this approach would yield N outputs, with a potential additional output for the ambient noise. A classical speech separation task like this must cope with two main challenges. First, the number of speakers in the recording must be identified, which in realistic scenarios is unknown. Second, the optimization of a speech separation system may need to be invariant to the permutation of speaker labels, as the order of the speakers should not have an impact during training [1]. Leveraging the advances in deep neural networks, several successful works have been introduced to address these problems, such as deep clustering [1], deep attractor network [2], and permutation invariant training [3].

This work addresses the task of isolating the voices of a subset of speakers of interest from all the other speakers and noises. For example, such a subset can be formed by a single target speaker issuing a spoken query to a personal mobile device, or by the members of a household talking to a shared home device. We also assume that the speaker(s) of interest can be individually characterized by previous reference recordings, e.g. through an enrollment stage. This task is closely related to classical speech separation, but differs in that it is speaker-dependent. In this paper, we refer to the task of speaker-dependent speech separation as voice filtering (some literature calls it speaker extraction). We argue that for voice filtering, speaker-independent techniques such as those presented in [1, 2, 3] may not be a good fit. In addition to the challenges described previously, these techniques require an extra step to determine which output – out of the possible outputs of the speech separation system – corresponds to the target speaker(s), e.g. by choosing the loudest speaker, running a speaker verification system on the outputs, or matching a specific keyword.

A more end-to-end approach to the voice filtering task is to treat the problem as a binary classification problem, where the positive class is the speech of the speaker of interest, and the negative class is formed by the combination of all foreground and background interfering speakers and noises. By conditioning the system on the speaker, this approach avoids the three aforementioned challenges: the unknown number of speakers, the permutation problem, and the selection from multiple outputs. In this work, we condition the system on the speaker embedding vector of a reference recording. The proposed approach is the following. We first train an LSTM-based speaker encoder to compute robust speaker embedding vectors. We then separately train a time-frequency mask-based system that takes two inputs: (1) the embedding vector of the target speaker, previously computed with the speaker encoder; and (2) the noisy multi-speaker audio. This system is trained to remove the interfering speakers and output only the voice of the target speaker (samples of output audio are available at: https://google.github.io/speaker-id/publications/VoiceFilter). This approach can be easily extended to more than one speaker of interest by applying the process in turn to the reference recording of each target speaker.

Similar literature exists for the task of voice filtering. For example, in [4, 5], the authors achieved impressive results by indirectly conditioning the system on the speaker through visual information (lip movement). However, such a solution requires simultaneous speech and visual information, which may not be available in certain types of applications, where a reference speech signal may be more practical. The systems proposed in [6, 7, 8, 9] are also very similar to ours, with a few major differences: (1) Instead of using one-hot vectors, i-vectors, or speaker posteriors derived from a cross-entropy classification network, our speaker encoder network is specifically designed for large-scale end-to-end speaker recognition [10], which has been shown to perform much better in speaker recognition tasks, especially for unseen speakers. (2) Instead of using a GEV beamformer [6, 8], our system directly optimizes the power-law compressed reconstruction error between the clean and enhanced signals [11]. (3) In addition to the source-to-distortion ratio [6, 7], we focus on Word Error Rate improvements for ASR systems. (4) We use dilated convolutional layers to capture low-level acoustic features more effectively. (5) We prefer a separately trained speaker encoder network over joint training as in [8, 9], due to the very different data requirements of speaker recognition and source separation tasks.

The rest of this paper is organized as follows. In Section 2, we describe our approach to the problem, and provide the details of how we train the neural networks. In Section 3, we describe our experimental setup, including the datasets we use and the evaluation metrics. The experimental results are presented in Section 4. We draw our conclusions in Section 5, with discussions on future work directions.

Figure 1: System architecture.

2 Approach

The system architecture is shown in Fig. 1. The system consists of two separately trained components: the speaker encoder (in red), and the VoiceFilter system (in blue), which uses the output of the speaker encoder as an additional input. In this section, we will describe these two components.

2.1 Speaker encoder

The purpose of the speaker encoder is to produce a speaker embedding from an audio sample of the target speaker. This system is based on a recent work from Wan et al. [10], which achieves strong performance on both text-dependent and text-independent speaker verification tasks, as well as on speaker diarization [12, 13], multispeaker TTS [14], and speech-to-speech translation [15].

The speaker encoder is a 3-layer LSTM network trained with the generalized end-to-end loss [10]. It takes as inputs log-mel filterbank energies extracted from windows of 1600 ms, and outputs speaker embeddings, called d-vectors, which have a fixed dimension of 256. To compute a d-vector on one utterance, we extract sliding windows with 50% overlap, and average the L2-normalized d-vectors obtained on each window.
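The sliding-window averaging described above can be sketched as follows. The `encoder` callable, the number of mel channels, and the 160-frame window (1600 ms at an assumed 10 ms frame step) are illustrative assumptions, not details fixed by the paper:

```python
import numpy as np

def utterance_dvector(frames, encoder, win=160, hop=80):
    """Compute an utterance-level d-vector: slide a window with 50%
    overlap over the frame sequence, L2-normalize the window-level
    embedding produced by `encoder`, and average the results.
    `encoder` maps a (win, n_mels) array to a fixed-dim embedding."""
    embeddings = []
    for start in range(0, frames.shape[0] - win + 1, hop):
        e = encoder(frames[start:start + win])
        embeddings.append(e / np.linalg.norm(e))  # L2-normalize each window
    return np.mean(embeddings, axis=0)            # average over windows
```

In practice the encoder would be the trained 3-layer LSTM producing 256-dimensional d-vectors; here any embedding function illustrates the windowing logic.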

2.2 VoiceFilter system

The VoiceFilter system is based on the recent work of Wilson et al. [11], developed for speech enhancement. As shown in Fig. 1, the neural network takes two inputs: a d-vector of the target speaker, and a magnitude spectrogram computed from a noisy audio. The network predicts a soft mask, which is element-wise multiplied with the input (noisy) magnitude spectrogram to produce an enhanced magnitude spectrogram. To obtain the enhanced waveform, we directly merge the phase of the noisy audio to the enhanced magnitude spectrogram, and apply an inverse STFT on the result. The network is trained to minimize the difference between the masked magnitude spectrogram and the target magnitude spectrogram computed from the clean audio.
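The masking and resynthesis steps can be sketched with SciPy's STFT; the FFT size and overlap below are illustrative choices, not the paper's exact analysis parameters:

```python
import numpy as np
from scipy.signal import stft, istft

def apply_mask(noisy_wav, mask, sr=16000, nperseg=400, noverlap=240):
    """Multiply the predicted soft mask with the noisy magnitude
    spectrogram, reattach the noisy phase, and invert with an iSTFT."""
    _, _, Z = stft(noisy_wav, fs=sr, nperseg=nperseg, noverlap=noverlap)
    # Element-wise soft masking of the magnitude, keeping the noisy phase.
    enhanced = (mask * np.abs(Z)) * np.exp(1j * np.angle(Z))
    _, wav = istft(enhanced, fs=sr, nperseg=nperseg, noverlap=noverlap)
    return wav
```

With an all-ones mask this pipeline reconstructs the input waveform, which is a useful sanity check of the STFT parameters.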

The VoiceFilter network is composed of 8 convolutional layers, 1 LSTM layer, and 2 fully connected layers, each with ReLU activations except the last layer, which has a sigmoid activation. The values of the parameters are provided in Table 1. The d-vector is repeatedly concatenated to the output of the last convolutional layer in every time frame. The resulting concatenated vector is then fed as the input to the following LSTM layer. We inject the d-vector between the convolutional layers and the LSTM layer, rather than before the convolutional layers, for two reasons. First, the d-vector is already a compact and robust representation of the target speaker, so we do not need to modify it by applying convolutional layers on top of it. Second, convolutional layers assume time and frequency homogeneity, and thus cannot be applied to an input composed of two completely different signals: a magnitude spectrogram and a speaker embedding.
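A minimal sketch of this per-frame concatenation step, with illustrative shapes:

```python
import numpy as np

def concat_dvector(cnn_out, dvector):
    """Repeat the d-vector at every time frame of the CNN output and
    concatenate along the feature axis; the result feeds the LSTM.
    cnn_out: (T, F) frame features; dvector: (D,) speaker embedding."""
    tiled = np.tile(dvector, (cnn_out.shape[0], 1))        # (T, D)
    return np.concatenate([cnn_out, tiled], axis=1)        # (T, F + D)
```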

Layer    Width (time × freq)    Dilation (time × freq)    Filters / Nodes
CNN 1          1 × 7                   1 × 1                    64
CNN 2          7 × 1                   1 × 1                    64
CNN 3          5 × 5                   1 × 1                    64
CNN 4          5 × 5                   2 × 1                    64
CNN 5          5 × 5                   4 × 1                    64
CNN 6          5 × 5                   8 × 1                    64
CNN 7          5 × 5                  16 × 1                    64
CNN 8          1 × 1                   1 × 1                     8
LSTM             -                       -                      400
FC 1             -                       -                      600
FC 2             -                       -                      600

Table 1: Parameters of the VoiceFilter network.

While training the VoiceFilter system, the input audio is divided into 3-second segments and converted, if necessary, to single-channel audio with a sampling rate of 16 kHz.
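A minimal sketch of this preprocessing, assuming the final partial segment is dropped (the paper does not specify how remainders are handled):

```python
import numpy as np

def make_segments(wav, sr=16000, seconds=3.0):
    """Convert to mono if needed and split into fixed-length
    3-second training segments."""
    if wav.ndim == 2:                      # (samples, channels) -> mono
        wav = wav.mean(axis=1)
    seg = int(sr * seconds)
    # Drop the trailing partial segment (an assumption of this sketch).
    return [wav[i * seg:(i + 1) * seg] for i in range(len(wav) // seg)]
```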

3 Experimental setup

In this section, we describe our experimental setup: the data used to train the two components of the system separately, as well as the metrics to assess the systems.

3.1 Data

3.1.1 Datasets

Speaker encoder:

Although our speaker encoder network has exactly the same network topology as the text-independent model described in [10], we use more training data in this system. Our speaker encoder is trained on two datasets combined with the MultiReader technique introduced in [10]. The first dataset consists of anonymized voice query logs in English from mobile and far-field devices. It has about 34 million utterances from about 138 thousand speakers. The second dataset consists of LibriSpeech [16], VoxCeleb [17], and VoxCeleb2 [18]. This model (referred to as “d-vector V2” in [13]) has a 3.06% equal error rate (EER) on our internal en-US phone audio test dataset, compared to the 3.55% EER of the one reported in [10].


VoiceFilter:

We cannot use a “standard” benchmark corpus for speech separation, such as one of the CHiME challenges [19], because we need a clean reference utterance of each target speaker in order to compute speaker embeddings. Instead, we train and evaluate the VoiceFilter system on our own generated data, derived either from the VCTK dataset [20] or from the LibriSpeech dataset [16]. For VCTK, we randomly take 99 speakers for training, and 10 speakers for testing. For LibriSpeech, we use the training and development sets defined in the protocol of the dataset: the training set contains 2338 speakers, and the development set contains 73 speakers. These two datasets contain read speech, and each utterance contains the voice of one speaker. We explain in the next section how we generate the data used to train the VoiceFilter system.

3.1.2 Data generation

Figure 2: Input data processing workflow.

From the system diagram in Fig. 1, we see that one training step involves three inputs: (1) the clean audio from the target speaker, which is the ground truth; (2) the noisy audio containing multiple speakers; and (3) a reference audio from the target speaker (different from the clean audio) over which the d-vector will be computed.

This training triplet can be obtained from three utterances of a clean dataset, as shown in Fig. 2. The reference audio is picked randomly among all the utterances of the target speaker, and is different from the clean audio. The noisy audio is generated by mixing the clean audio and an interfering audio randomly selected from a different speaker. More specifically, it is obtained by directly summing the clean audio and the interfering audio, then trimming the result to the length of the clean audio.
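The mixing step can be sketched as follows; zero-padding a shorter interference signal is an assumption of this sketch, since the text only specifies summing and trimming:

```python
import numpy as np

def make_noisy(clean, interference):
    """Generate the noisy training input: sum the clean and interfering
    waveforms, then trim to the clean audio's length."""
    if len(interference) < len(clean):
        # Pad with zeros so the sum is defined (assumption, not from paper).
        interference = np.pad(interference, (0, len(clean) - len(interference)))
    return clean + interference[:len(clean)]
```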

We have also tried multiplying the interfering audio by a random weight drawn from a uniform distribution over each of two different ranges. However, this did not affect the performance of the VoiceFilter system in our experiments.

3.2 Evaluation

To evaluate the performance of different VoiceFilter models, we use two metrics: the speech recognition Word Error Rate (WER) and the Source to Distortion Ratio (SDR).

3.2.1 Word error rate

As mentioned in Sec. 1, the main goal of our system is to improve speech recognition. Specifically, we want to reduce the WER in multi-speaker scenarios, while preserving the same WER in single-speaker scenarios. The speech recognizer we use for WER evaluation is a version of the conventional phone models discussed in [21], which is trained on a YouTube dataset.

For each VoiceFilter model, we care about four WER numbers:

  • Clean WER: Without VoiceFilter, the WER on the clean audio.

  • Noisy WER: Without VoiceFilter, the WER on the noisy (clean + interference) audio.

  • Clean-enhanced WER: The WER on the clean audio processed by the VoiceFilter system.

  • Noisy-enhanced WER: The WER on the noisy audio processed by the VoiceFilter system.

A good VoiceFilter model should have these two properties:

  1. Noisy-enhanced WER is significantly lower than Noisy WER, meaning that the VoiceFilter is improving speech recognition in multi-speaker scenarios.

  2. Clean-enhanced WER is very close to Clean WER, meaning that the VoiceFilter has minimal negative impact on single-speaker scenarios.
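The headline comparison between these numbers is the relative WER reduction:

```python
def relative_wer_reduction(noisy_wer, enhanced_wer):
    """Relative WER reduction in percent: the fraction of the noisy
    WER that the VoiceFilter removes."""
    return 100.0 * (noisy_wer - enhanced_wer) / noisy_wer
```

For example, reducing the noisy WER from 55.9% to 23.4% corresponds to a 58.1% relative reduction, the figure reported in Section 4.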

3.2.2 Source to distortion ratio

The SDR is a very common metric for evaluating source separation systems [22], and requires knowledge of both the clean signal and the enhanced signal. It is an energy ratio, expressed in dB, between the energy of the target signal contained in the enhanced signal and the energy of the errors (coming from the interfering speakers and artifacts). Thus, higher is better.
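A simplified version of this ratio can be sketched as follows; note that the full BSS Eval SDR [22] involves a decomposition of the estimate into target, interference, and artifact terms, which this sketch omits:

```python
import numpy as np

def simple_sdr(reference, estimate):
    """Energy ratio (dB) between the reference signal and the
    residual error of the estimate."""
    error = estimate - reference
    return 10.0 * np.log10(np.sum(reference ** 2) / np.sum(error ** 2))
```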

4 Results

4.1 Word error rate

VoiceFilter Model        Clean WER (%)    Noisy WER (%)
No VoiceFilter               10.9             55.9
VoiceFilter: no LSTM         12.2             35.3
VoiceFilter: LSTM            12.2             28.2
VoiceFilter: bi-LSTM         11.1             23.4

Table 2: Speech recognition WER on LibriSpeech. VoiceFilter is trained on LibriSpeech.
VoiceFilter Model        Clean WER (%)    Noisy WER (%)
No VoiceFilter                6.1             60.6
Trained on VCTK              21.1             37.0
Trained on LibriSpeech        5.9             34.3

Table 3: Speech recognition WER on VCTK. LSTM layer is uni-directional. Model architecture is shown in Table 1.

In Table 2, we present the results of VoiceFilter models trained and evaluated on the LibriSpeech dataset. The architecture of the VoiceFilter system is shown in Table 1, with a few variations of the LSTM layer: (1) no LSTM layer, i.e., only convolutional layers directly followed by fully connected layers; (2) a uni-directional LSTM layer; (3) a bi-directional LSTM layer. In general, after applying VoiceFilter, the WER on the noisy data is significantly lower than before, while the WER on the clean data remains close to before. There is a significant gap between the first and second model, meaning that processing the data sequentially with an LSTM is an important component of the system. Moreover, the bi-directional LSTM layer achieves the best WER on the noisy data. With this model, applying the VoiceFilter system on the noisy data reduces the speech recognition WER by a relative 58.1%. In the clean scenario, the performance degradation caused by the VoiceFilter system is very small: the WER is 11.1% instead of 10.9%.

In Table 3, we present the WER results of VoiceFilter models evaluated on the VCTK dataset. With a VoiceFilter model also trained on VCTK, the WER on the noisy data after applying VoiceFilter is significantly lower than before, a relative reduction of 38.9%. However, the WER on the clean data after applying VoiceFilter is significantly higher. This is mostly because the VCTK training set is too small, containing only 99 speakers. If we use a VoiceFilter model trained on LibriSpeech instead, the WER on the noisy dataset further decreases, while the WER on the clean data drops to 5.9%, which is even lower than before applying VoiceFilter. This means: (1) the VoiceFilter model is able to generalize from one dataset to another; and (2) we are improving the acoustic quality of the original clean audio, even though we did not explicitly train for this.

Note that the LibriSpeech training set contains more than 20 times as many speakers as VCTK (2338 instead of 99), which is the major difference between the two models shown in Table 3. Thus, the results also imply that we could further improve our VoiceFilter model by training it with even more speakers.

4.2 Source to distortion ratio

VoiceFilter Model        Mean SDR    Median SDR
No VoiceFilter             10.1          2.5
VoiceFilter: no LSTM       11.9          9.7
VoiceFilter: LSTM          15.6         11.3
VoiceFilter: bi-LSTM       17.9         12.6
PermInv: bi-LSTM           17.2         11.9

Table 4: Source to distortion ratio on LibriSpeech. Unit is dB. PermInv stands for permutation invariant loss [3]. Mean SDR for “No VoiceFilter” is high since some clean signals are mixed with silent parts of interference signals.

We present the SDR numbers in Table 4. The results follow the same trend as the WER in Table 2. The bi-directional LSTM approach in the VoiceFilter achieves the highest SDR.

We also compare the VoiceFilter results to a speech separation model that uses the permutation invariant loss [3]. This model has the same architecture as the VoiceFilter system (with a bi-directional LSTM), presented in Table 1, but is not fed with speaker embeddings. Instead, it separates the noisy input into two components, corresponding to the clean and the interfering audio, and chooses the output that is the closest to the ground truth, i.e., the one with the highest SDR. This system can be seen as an “oracle” system, as it knows both the number of sources contained in the noisy signal and the ground truth clean signal. As explained in Section 1, using such a system in practice would require: 1) estimating how many speakers are in the noisy input, and 2) choosing which output to select, e.g. by running a speaker verification system on each output (which might not be efficient if there are many interfering speakers).

We observe that the VoiceFilter system outperforms the system based on the permutation invariant loss. This shows that our system not only solves the two aforementioned issues, but also that conditioning on a speaker embedding improves the system's ability to extract the source of interest (yielding a higher SDR).

4.3 Discussions

In Table 2, we tried a few variants of the VoiceFilter model on LibriSpeech, and the best WER performance was achieved with a bi-directional LSTM. However, it is likely that similar performance could be achieved by adding more layers or nodes to a uni-directional LSTM. Future work includes exploring more variants and fine-tuning the hyper-parameters to achieve better performance at lower computational cost, but that is beyond the focus of this paper.

5 Conclusions and future work

In this paper, we have demonstrated the effectiveness of using a discriminatively-trained speaker encoder to condition the speech separation task. Such a system is more applicable to real scenarios because it does not require prior knowledge about the number of speakers and avoids the permutation problem. We have shown that a VoiceFilter model trained on the LibriSpeech dataset reduces the speech recognition WER from 55.9% to 23.4% in two-speaker scenarios, while the WER stays approximately the same on single-speaker scenarios.

This system could be improved by taking a few steps: (1) training on larger and more challenging datasets such as VoxCeleb 1 and 2 [17, 18]; (2) adding more interfering speakers; and (3) computing the d-vectors over several utterances instead of only one to obtain more robust speaker embeddings. Another interesting direction would be to train the VoiceFilter system to perform joint voice separation and speech enhancement, i.e., to remove both the interfering speakers and the ambient noise. To do so, we could add different noises when mixing the clean audio with interfering utterances. This approach will be part of future investigations. Finally, the VoiceFilter system could also be trained jointly with the speech recognition system to further reduce the WER.

6 Acknowledgements

The authors would like to thank Seungwon Park for open sourcing a third-party implementation of this system (https://github.com/mindslab-ai/voicefilter). We would also like to thank Yiteng (Arden) Huang, Jason Pelecanos, and Fadi Biadsy for helpful discussions.


  • [1] J. R. Hershey, Z. Chen, J. Le Roux, and S. Watanabe, “Deep clustering: Discriminative embeddings for segmentation and separation,” in International Conference on Acoustics, Speech and Signal Processing (ICASSP).    IEEE, 2016, pp. 31–35.
  • [2] Z. Chen, Y. Luo, and N. Mesgarani, “Deep attractor network for single-microphone speaker separation,” in International Conference on Acoustics, Speech and Signal Processing (ICASSP).    IEEE, 2017.
  • [3] D. Yu, M. Kolbæk, Z.-H. Tan, and J. Jensen, “Permutation invariant training of deep models for speaker-independent multi-talker speech separation,” in International Conference on Acoustics, Speech and Signal Processing (ICASSP).    IEEE, 2017, pp. 241–245.
  • [4] A. Ephrat, I. Mosseri, O. Lang, T. Dekel, K. Wilson, A. Hassidim, W. T. Freeman, and M. Rubinstein, “Looking to listen at the cocktail party: A speaker-independent audio-visual model for speech separation,” in SIGGRAPH, 2018.
  • [5] T. Afouras, J. S. Chung, and A. Zisserman, “The conversation: Deep audio-visual speech enhancement,” in Interspeech, 2018.
  • [6] K. Zmolikova, M. Delcroix, K. Kinoshita, T. Higuchi, A. Ogawa, and T. Nakatani, “Speaker-aware neural network based beamformer for speaker extraction in speech mixtures,” in Interspeech, 2017.
  • [7] J. Wang, J. Chen, D. Su, L. Chen, M. Yu, Y. Qian, and D. Yu, “Deep extractor network for target speaker recovery from single channel speech mixtures,” arXiv preprint arXiv:1807.08974, 2018.
  • [8] K. Žmolíková, M. Delcroix, K. Kinoshita, T. Higuchi, A. Ogawa, and T. Nakatani, “Learning speaker representation for neural network based multichannel speaker extraction,” in Automatic Speech Recognition and Understanding Workshop (ASRU).    IEEE, 2017, pp. 8–15.
  • [9] M. Delcroix, K. Zmolikova, K. Kinoshita, A. Ogawa, and T. Nakatani, “Single channel target speaker extraction and recognition with speaker beam,” in International Conference on Acoustics, Speech and Signal Processing (ICASSP).    IEEE, 2018, pp. 5554–5558.
  • [10] L. Wan, Q. Wang, A. Papir, and I. L. Moreno, “Generalized end-to-end loss for speaker verification,” in International Conference on Acoustics, Speech and Signal Processing (ICASSP).    IEEE, 2018, pp. 4879–4883.
  • [11] K. Wilson, M. Chinen, J. Thorpe, B. Patton, J. Hershey, R. A. Saurous, J. Skoglund, and R. F. Lyon, “Exploring tradeoffs in models for low-latency speech enhancement,” in International Workshop on Acoustic Signal Enhancement (IWAENC).    IEEE, 2018, pp. 366–370.
  • [12] Q. Wang, C. Downey, L. Wan, P. A. Mansfield, and I. L. Moreno, “Speaker diarization with LSTM,” in International Conference on Acoustics, Speech and Signal Processing (ICASSP).    IEEE, 2018, pp. 5239–5243.
  • [13] A. Zhang, Q. Wang, Z. Zhu, J. Paisley, and C. Wang, “Fully supervised speaker diarization,” arXiv preprint arXiv:1810.04719, 2018.
  • [14] Y. Jia, Y. Zhang, R. J. Weiss, Q. Wang, J. Shen, F. Ren, Z. Chen, P. Nguyen, R. Pang, I. L. Moreno et al., “Transfer learning from speaker verification to multispeaker text-to-speech synthesis,” in Conference on Neural Information Processing Systems (NIPS), 2018.
  • [15] Y. Jia, R. J. Weiss, F. Biadsy, W. Macherey, M. Johnson, Z. Chen, and Y. Wu, “Direct speech-to-speech translation with a sequence-to-sequence model,” arXiv preprint arXiv:1904.06037, 2019.
  • [16] V. Panayotov, G. Chen, D. Povey, and S. Khudanpur, “LibriSpeech: An ASR corpus based on public domain audio books,” in International Conference on Acoustics, Speech and Signal Processing (ICASSP).    IEEE, 2015, pp. 5206–5210.
  • [17] A. Nagrani, J. S. Chung, and A. Zisserman, “VoxCeleb: A large-scale speaker identification dataset,” in Interspeech, 2017.
  • [18] J. S. Chung, A. Nagrani, and A. Zisserman, “Voxceleb2: Deep speaker recognition,” in Interspeech, 2018.
  • [19] J. P. Barker, R. Marxer, E. Vincent, and S. Watanabe, “The CHiME challenges: Robust speech recognition in everyday environments,” in New Era for Robust Speech Recognition.    Springer, 2017, pp. 327–344.
  • [20] C. Veaux, J. Yamagishi, K. MacDonald et al., “Superseded-CSTR VCTK Corpus: English multi-speaker corpus for CSTR voice cloning toolkit,” 2016.
  • [21] H. Soltau, H. Liao, and H. Sak, “Neural speech recognizer: Acoustic-to-word lstm model for large vocabulary speech recognition,” in Interspeech, 2017.
  • [22] E. Vincent, R. Gribonval, and C. Févotte, “Performance measurement in blind audio source separation,” IEEE transactions on audio, speech, and language processing, vol. 14, no. 4, pp. 1462–1469, 2006.