Few Shot Speaker Recognition using Deep Neural Networks
Recent advances in deep learning are largely driven by the availability of large amounts of training data. However, such data are not always available for specific tasks such as speaker recognition, where collecting large amounts of data is impractical. Therefore, in this paper, we propose to identify speakers by learning from only a few training examples. To achieve this, we use a deep neural network with prototypical loss, where the input to the network is a spectrogram. For the output, we project the class feature vectors into a common embedding space, followed by classification. Further, we show the effectiveness of capsule networks in a few-shot learning setting. To this end, we utilize an auto-encoder to learn generalized feature embeddings from the class-specific embeddings obtained from a capsule network. We provide exhaustive experiments on publicly available datasets against competitive baselines, demonstrating the superiority and generalization ability of the proposed few-shot learning pipelines.
Prashant Anand, Ajeet Kumar Singh, Siddharth Srivastava, Brejesh Lall
Indian Institute of Technology Delhi, India
Centre for Development of Advanced Computing, Noida
Speaker recognition with only a few, short utterances is a challenging problem. In this paper, we assume that for each speaker, only a few very short utterances are available. Specifically, we learn from only a few utterances of a few seconds each per speaker, unlike earlier work where considerably longer recordings are still considered short utterances. Apart from the computational and technical improvements obtained by solving the problem under such constraints, this setting also makes the enrolment of speakers easier and more practical.
Recently, many approaches have been proposed for speaker identification using deep neural networks [1, 2, 3], but they assume the availability of large amounts of training data. Moreover, the deep learning pipelines attempting to learn from short utterances are, in general, based on i-vectors or Mel-frequency cepstral coefficients (MFCCs). While MFCCs are known for their susceptibility to noise, the performance of i-vectors tends to suffer in the case of short utterances. Moreover, it has been shown that convolutional neural networks (CNNs) are able to mitigate the noise susceptibility of i-vectors and MFCCs, and they have been used successfully for speaker recognition. Since convolutional neural networks are data-hungry and able to exploit structured inputs such as images very effectively, large-scale speaker recognition datasets have recently been made publicly available, where benchmarks set up on CNNs with spectrograms as input have been shown to perform very well [9, 10]. However, their effectiveness in learning or generalizing with limited amounts of data (few shots) and short utterances is not well established.
Few-shot learning paradigms have recently been applied effectively to audio processing [11, 12]. However, their effectiveness for speech processing, especially speaker recognition, is still unknown. To this end, we utilize a CNN as the base network, with spectrograms derived directly from raw audio files as input, and evaluate the effectiveness of these networks in the constrained setting of few-shot learning for speaker identification. We choose VGGNet and ResNet as the base architectures and evaluate them under various settings. To generalize them to unseen speakers, we use the prototypical loss.
While CNNs have been shown to perform very well, they are not able to exploit the spatial relationships within an image. Bae et al. argued that CNNs are not able to leverage the spatial information within spectrograms, such as between pitch and formant. Therefore, they utilized capsule networks for speech command recognition. Based on their work, we argue that exploiting spatial relationships in spectrogram images can lead to better speaker recognition as well. However, there are two problems in applying capsule networks to speaker recognition. First, their applicability to complex data, and hence their generalization ability, is yet to be established. Second, they are extremely computationally intensive. We reduce the computational complexity by dropping the reconstruction loss from the default capsule network. Next, we add an auto-encoder to map the class feature vectors from the capsule network to a common embedding space. The projected feature vectors from the embedding space are then subjected to the prototypical loss to learn from the constrained data. The entire pipeline is trained end-to-end.
In view of the above, the contributions of this paper are as follows:
To the best of our knowledge, this is the first work that poses speaker recognition as a few-shot learning problem and applies convolutional neural networks and capsule networks to speaker recognition under the constraints of short and limited numbers of utterances.
We show that using a convolutional neural network with a spectrogram as input and the prototypical loss, a speaker can be identified with high confidence from only a few training samples of very short utterances.
We propose a novel network based on the Capsule Network, significantly reducing the number of parameters and learning a class embedding using an auto-encoder with a prototypical loss to generalize the capsule network to unseen speakers. We show that the proposed method performs better than VGG with an equivalent number of parameters, while lagging behind ResNet, which has a significantly higher number of parameters.
We perform exhaustive experiments on publicly available datasets and analyze the performance of the considered networks under various settings.
Figure 1 shows the pipeline for using spectrograms with deep neural networks. The input to the network is a spectrogram obtained from a raw audio file. For few-shot learning, the network may be pre-trained with a large dataset. The task is then to classify new speakers for whom we have a limited number of samples. Here, we additionally pose the constraint that, along with the limited number of samples, the duration of each sample is limited to a few seconds only. For classification, the existing network is fine-tuned with the training samples of the new speakers. However, as demonstrated in the experiments, directly using the embeddings obtained from a pre-trained network causes a significant drop in performance. Therefore, we propose to use the prototypical loss to optimize the embeddings by forming representative prototypes (Section 2.3).
We evaluate two types of networks, viz. Convolutional Neural Networks (VGG, ResNet) and Capsule Networks. Extending CNNs for the prototypical loss is straightforward, as they provide class-agnostic feature embeddings, and hence the prototypes can be learned directly from the feature vectors. However, this is not the case with Capsule Networks, as they learn class-specific embeddings. Therefore, we learn a generalized embedding using an auto-encoder prior to applying the prototypical loss (Section 2.4). We now describe each component in detail.
2.1 Spectrogram Construction
First, we convert all audio to single-channel streams at a common bit depth and sampling rate for consistency. The spectrograms are then generated by a sliding-window protocol using a Hamming window with a fixed width and step size. This gives spectrograms of height equal to the number of FFT features for a few seconds of randomly sampled speech from each audio file. Subsequently, each frequency bin is normalized (mean, variance). The spectrograms are constructed from the raw audio input, i.e., no pre-processing such as silence removal is performed.
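A minimal NumPy sketch of this spectrogram construction; since the exact parameter values are elided above, the 16 kHz sampling rate, 25 ms Hamming window, 10 ms step, and 512-point FFT used here are illustrative assumptions, not the paper's confirmed settings:

```python
import numpy as np

def spectrogram(audio, sr=16000, win_ms=25, step_ms=10, n_fft=512):
    """Sliding-window magnitude spectrogram with a Hamming window,
    followed by per-frequency-bin mean/variance normalization
    (no silence removal or other pre-processing)."""
    win = int(sr * win_ms / 1000)   # samples per window
    step = int(sr * step_ms / 1000) # samples per hop
    window = np.hamming(win)
    frames = [audio[i:i + win] * window
              for i in range(0, len(audio) - win + 1, step)]
    # Magnitude spectrum of each frame -> (n_fft // 2 + 1) x num_frames
    spec = np.abs(np.fft.rfft(frames, n=n_fft, axis=1)).T
    # Normalize each frequency bin to zero mean, unit variance
    mean = spec.mean(axis=1, keepdims=True)
    std = spec.std(axis=1, keepdims=True) + 1e-8
    return (spec - mean) / std

audio = np.random.randn(16000 * 3)  # 3 s of dummy audio at 16 kHz
S = spectrogram(audio)              # shape: (257, 298) with these settings
```

With these assumed settings, a 3-second clip yields 257 FFT features by 298 frames, matching the "(number of FFT features) x (number of frames)" layout described above.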
2.2.1 Capsule Network
The capsule network proposed by Hinton and colleagues replaces the scalar-output neurons present in Deep Neural Networks (DNNs) and Convolutional Neural Networks (CNNs) with a capsule (a vector-output neuron). This enables the capsule network to capture pose information and contain it in a vector. The network is trained by the “dynamic routing between capsules” algorithm. The Capsule Network consists of a convolutional layer, a primary capsule layer and a dense capsule layer, with a convolution operation inside the primary capsule layer. Our modified capsule network, CapsuleNet-M, changes the stride of both the first convolutional layer and the primary capsule layer. The Capsule Network is trained with the margin loss shown below.
$L_k = T_k \max(0, m^+ - \|\mathbf{v}_k\|)^2 + \lambda (1 - T_k) \max(0, \|\mathbf{v}_k\| - m^-)^2$

Here, $L_k$ is the margin loss of class $k$, $\mathbf{v}_k$ is the final output capsule of class $k$, $T_k = 1$ iff the target class is $k$, $m^+ = 0.9$, $m^- = 0.1$, and $\lambda = 0.5$.
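As a concrete illustration, here is a minimal NumPy sketch of the capsule output non-linearity and the margin loss; the hyper-parameter values $m^+ = 0.9$, $m^- = 0.1$, $\lambda = 0.5$ are the standard ones from the dynamic-routing paper (Sabour et al., 2017) and are assumptions here, since the exact values are elided in this text:

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    """Squashing non-linearity (Sabour et al., 2017): keeps a capsule's
    orientation but shrinks its length into [0, 1), so the length of
    v_k can act as the probability that class k is present."""
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

def margin_loss(v_norms, target, m_pos=0.9, m_neg=0.1, lam=0.5):
    """Margin loss over capsule lengths.
    v_norms: (num_classes,) lengths of the output capsules.
    target:  index of the true class."""
    T = np.zeros_like(v_norms)
    T[target] = 1.0
    present = T * np.maximum(0.0, m_pos - v_norms) ** 2
    absent = lam * (1.0 - T) * np.maximum(0.0, v_norms - m_neg) ** 2
    return float(np.sum(present + absent))

v = squash(np.array([3.0, 4.0]))   # length 5 is squashed to 25/26 < 1
# A confident, correct prediction incurs zero loss
loss = margin_loss(np.array([0.95, 0.05, 0.05]), target=0)
```

The loss pulls the true-class capsule length above $m^+$ and pushes all other capsule lengths below $m^-$, with $\lambda$ down-weighting the absent-class term.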
2.2.2 VGG-M and ResNet-34
VGG and ResNet are deep CNN architectures with very good classification performance on image data, so we chose these models with slight modifications to adapt them to spectrogram input. These modified VGG and ResNet-34 models were introduced in earlier work. The architectures of VGG-M and ResNet-34 used are specified in Tables 1 and 2, respectively.
2.3 Few shot learning using Prototypical Loss
In few shot learning, at test time we have to classify test samples among new classes given very few labeled examples of each class. We are provided with $D$-dimensional inputs $x_i$ and their corresponding labels $y_i \in \{1, \dots, K\}$. The objective is to compute an $M$-dimensional representation $c_k$, i.e. the prototype of each class $k$. The embedding is computed via a function $f_\phi : \mathbb{R}^D \rightarrow \mathbb{R}^M$, where $f$ indicates a deep neural network and $\phi$ indicates its parameters. The prototype is the mean of the support points $S_k$ for a class:

$c_k = \frac{1}{|S_k|} \sum_{(x_i, y_i) \in S_k} f_\phi(x_i)$
By using a distance function $d$, a distribution over classes is learned for a query point $x$, and is given as

$p_\phi(y = k \mid x) = \frac{\exp(-d(f_\phi(x), c_k))}{\sum_{k'} \exp(-d(f_\phi(x), c_{k'}))}$
At training time, we minimize the negative log-probability of the true class. The training data is generated by randomly selecting a smaller subset of classes from the training set. Then, from each class, we choose a subset of samples to serve as support points, while the rest are treated as query points. The flow diagram for few-shot learning is shown in Fig. 1.
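A minimal NumPy sketch of the prototype computation and the softmax-over-distances classification described in this section; the use of squared Euclidean distance and the toy 2-D embeddings are illustrative assumptions:

```python
import numpy as np

def prototypes(support, labels, num_classes):
    """Class prototypes: the mean of the support embeddings of each class."""
    return np.stack([support[labels == k].mean(axis=0)
                     for k in range(num_classes)])

def proto_log_probs(query, protos):
    """log p(y=k|x) = -d(f(x), c_k) normalized over classes,
    with squared Euclidean distance as d."""
    d = ((query[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    logits = -d
    return logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))

# Toy 2-way episode with 2-D embeddings (stand-ins for network outputs)
support = np.array([[0.0, 0.0], [0.0, 2.0], [10.0, 10.0], [10.0, 12.0]])
labels = np.array([0, 0, 1, 1])
protos = prototypes(support, labels, num_classes=2)   # [[0, 1], [10, 11]]
query = np.array([[0.0, 1.0]])
pred = proto_log_probs(query, protos).argmax(axis=1)  # nearest prototype wins
```

Training minimizes the negative of `proto_log_probs` at the true class over the query points of each episode; at test time, the query is assigned to the nearest prototype.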
2.4 Projection of Capsule Net class vectors into embedding space for few shot learning
A problem in extending CapsuleNet to few shot recognition is that the final layer learns class specific embeddings. This prevents using pre-trained capsule networks or fine tuning them for different number of classes. In order to overcome this, we append an autoencoder to capsule network which takes as input the concatenated class vectors from capsule net and learns an embedding of these class vectors. To adapt it to few shot recognition, we apply the prototypical loss to these embeddings. The block diagram is shown in Figure 2.
The intuition behind using an auto-encoder is that a concatenation of class vectors represents a distribution of the input over a feature vector. However, these class vectors are obtained for specific classes. Therefore, we need an embedding that can generalize over unseen classes as well. We choose a contractive auto-encoder, as we want the embeddings to be similar for similar inputs, yet discriminative enough for similar audios not belonging to the same speaker. This is especially useful for the prototypical loss, as the embeddings from the auto-encoder can be compared with a distance function, assisting the formation of prototypes for classes with limited training samples.
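A minimal NumPy sketch of this idea, assuming a one-layer sigmoid encoder and a linear decoder; the layer sizes and the penalty weight are illustrative assumptions. For a sigmoid encoder, the contractive penalty of Rifai et al. (the squared Frobenius norm of the encoder Jacobian) has the closed form used below:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def contractive_ae_loss(x, W_enc, b_enc, W_dec, b_dec, lam=1e-3):
    """Contractive auto-encoder loss on a single input x (here, a
    concatenation of capsule class vectors): reconstruction error plus
    the squared Frobenius norm of the encoder Jacobian."""
    h = sigmoid(W_enc @ x + b_enc)   # embedding fed to the prototypical loss
    x_rec = W_dec @ h + b_dec        # linear decoder reconstruction
    recon = np.sum((x - x_rec) ** 2)
    # For a sigmoid encoder: ||J||_F^2 = sum_j (h_j (1 - h_j))^2 * sum_i W_ji^2
    jac = np.sum((h * (1.0 - h)) ** 2 * np.sum(W_enc ** 2, axis=1))
    return recon + lam * jac, h

rng = np.random.default_rng(0)
x = rng.normal(size=160)                  # e.g. 10 class vectors of 16 dims, concatenated
W_enc = rng.normal(size=(64, 160)) * 0.05
W_dec = rng.normal(size=(160, 64)) * 0.05
loss, emb = contractive_ae_loss(x, W_enc, np.zeros(64), W_dec, np.zeros(160))
```

The contractive penalty makes the embedding `emb` insensitive to small perturbations of the input, which is what lets similar inputs land near each other in the embedding space before the prototypical loss is applied.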
VoxCeleb1: VoxCeleb1 contains over 100,000 utterances for 1,251 speakers, extracted from YouTube videos.
Since the dataset does not provide a standard split for few-shot recognition, we follow this methodology: from the training set, we randomly sample a fixed number of short audio instances for each speaker. To avoid any overlap in the training data, the sampling is performed on the separate training files provided in the dataset. At test time, we randomly sample a short audio clip from the test files.
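The per-speaker support/query sampling used for episodic few-shot training and evaluation can be sketched as follows; the speaker names, list sizes, and the n-way/k-shot values are illustrative assumptions:

```python
import random

def sample_episode(data, n_way, k_shot, q_query, rng=random):
    """Sample one few-shot episode: n_way speakers, with k_shot support
    and q_query query utterances each. `data` maps speaker -> utterances."""
    speakers = rng.sample(sorted(data), n_way)
    support, query = [], []
    for label, spk in enumerate(speakers):
        utts = rng.sample(data[spk], k_shot + q_query)  # no overlap within episode
        support += [(u, label) for u in utts[:k_shot]]
        query += [(u, label) for u in utts[k_shot:]]
    return support, query

# Dummy pool: 30 speakers with 10 short utterances each
data = {f"spk{i}": [f"spk{i}_utt{j}" for j in range(10)] for i in range(30)}
support, query = sample_episode(data, n_way=5, k_shot=1, q_query=3)
```

Each episode mimics the test-time condition: prototypes are formed from the support utterances and the query utterances are classified against them.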
VCTK: The VCTK Corpus includes speech data uttered by native speakers of English with various accents. The VCTK corpus contains clearly read speech, while VoxCeleb has more background noise and overlapping speech. The dataset does not provide a standard train and test split; therefore, we create our own. We use only a randomly sampled short audio clip from each split for training and evaluation purposes.
For future benchmarking and reproducibility, we will make the train and test split of the above datasets publicly available.
3.2.1 Speaker Identification with Deep Neural Network
In the first experiment, we analyze the relative performance of various deep networks when a large amount of training data is available. In Table 3, we show results using vanilla networks with the standard training and test data provided with the VoxCeleb1 dataset (not the few-shot setting). We denote the capsule net architecture without reconstruction loss as CapsuleNet-M. As the capsule net requires significant computational resources for large datasets, we test on a subset of VoxCeleb1 when comparing against VGG and ResNet. We use the first 50 and 200 classes of the VoxCeleb1 dataset and use the standard train, val and test splits given for these classes. We generate the spectrogram for the audio files as explained in Section 2.1 and feed it into the network. It can be seen that ResNet significantly outperforms the other methods while having several times the number of parameters of VGG-M and CapsuleNet-M. Both VGG and CapsuleNet have a comparable number of parameters; however, VGG performs better on average. In addition to the chosen input size, we also experimented with varying the number of FFT features and found that while doing so significantly increases the number of parameters, it does not lead to any significant boost in performance.
[Table: Architecture | NP | Top-1 Acc | Top-5 Acc, for 50 and 200 classes, 1-shot and 5-shot; table body not recovered]
[Table: Architecture | NP | Top-1 Acc | Top-5 Acc, 1-shot and 5-shot; table body not recovered]
3.2.2 Effect of Number of Training Samples per Class
We now study the impact of reducing the number of training samples on the vanilla networks. As the previous experiment was based on training with the entire training data, this setting allows us to evaluate the performance of these networks when the number of samples for each speaker reduces drastically. We vary the number of training samples per class, and the results are shown in Figure 3. It can be observed that with very few shots, the performance of all the networks decreases drastically. However, CapsuleNet-M performs better than both ResNet and VGGNet. Moreover, as expected, with an increasing number of samples, the accuracy of all three networks increases, with the Capsule Network consistently performing better than VGG up to a moderate number of samples. This is an interesting observation and indicates that with fewer samples, it is able to better exploit the structural composition of the input spectrogram. Moreover, with nearly one-third the number of parameters of ResNet, it performs close to ResNet at low shot counts.
3.2.3 Few Shot Learning with Prototypical Loss
The results on few-shot speaker identification are shown in Table 4. CapsuleNet-MA denotes the architecture discussed in Section 2.4. It can be observed that with the prototypical loss, the performance of all the networks increases significantly. Interestingly, all the networks provide significantly better performance than simply reducing the number of training samples (Section 3.2.2). Moreover, the capsule network with auto-encoder outperforms VGG while performing close to ResNet. ResNet also provides high accuracy with both 1-shot and 5-shot classification for 20 speakers. This is important, as it indicates that even under heavily constrained settings one can identify a speaker with high confidence.
3.2.4 Generalization Ability of Networks
To show the generalization ability of the networks, we first take the models trained on VoxCeleb1 classes and fine-tune them on the first 50 classes of the VCTK corpus for the speaker identification task. We report the accuracy in Table 5. We also take models pre-trained on VoxCeleb1 for the few-shot learning task and use them to perform few-shot classification on the VCTK Corpus without fine-tuning, reporting these results in Table 6. It can be observed that, with pre-trained networks, the method is able to generalize to entirely unseen classes with samples collected using entirely different criteria. From Table 5, it can be observed that by just fine-tuning the networks, ResNet achieves the best top-1 accuracy, while VGG and CapsuleNet follow similar trends with their accuracies slightly behind ResNet. However, in the case of few-shot recognition (Table 6), we notice a significant drop in the performance of VGG, while CapsuleNet-MA performs better than VGG on average.
We have shown the effectiveness of few-shot learning approaches for speaker identification. We also demonstrated that with deep neural networks, one can identify speakers with high confidence, with ResNet outperforming other techniques by a significant margin. However, the number of parameters for ResNet is comparatively high. On the other hand, CapsuleNet performed better on smaller amounts of data, but applying it to few-shot recognition is not trivial. Therefore, we proposed an extension of the Capsule Network that learns a generalized embedding using a contractive auto-encoder. The computed embeddings are used for learning prototype embeddings with the prototypical loss. We believe that this work will accelerate the use of CNNs in practical implementations, as well as catalyze research on Capsule Networks to make them more efficient on large-scale data.
-  D. Snyder, D. Garcia-Romero, G. Sell, D. Povey, and S. Khudanpur, “X-vectors: Robust DNN embeddings for speaker recognition,” in 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2018, pp. 5329–5333.
-  D. Snyder, D. Garcia-Romero, D. Povey, and S. Khudanpur, “Deep neural network embeddings for text-independent speaker verification.” in Interspeech, 2017, pp. 999–1003.
-  Y. Lei, N. Scheffer, L. Ferrer, and M. McLaren, “A novel scheme for speaker recognition using a phonetically-aware deep neural network,” in 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2014, pp. 1695–1699.
-  J. Guo, N. Xu, K. Qian, Y. Shi, K. Xu, Y. Wu, and A. Alwan, “Deep neural network based i-vector mapping for speaker verification using short utterances,” Speech Communication, vol. 105, pp. 92–102, 2018.
-  X. Zhao and D. Wang, “Analyzing noise robustness of MFCC and GFCC features in speaker identification,” in 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2013, pp. 7204–7208.
-  A. Kanagasundaram, R. Vogt, D. B. Dean, S. Sridharan, and M. W. Mason, “I-vector based speaker recognition on short utterances,” in Proceedings of the 12th Annual Conference of the International Speech Communication Association. International Speech Communication Association (ISCA), 2011, pp. 2341–2344.
-  M. McLaren, Y. Lei, N. Scheffer, and L. Ferrer, “Application of convolutional neural networks to speaker recognition in noisy conditions,” in Fifteenth Annual Conference of the International Speech Communication Association, 2014.
-  Z. Liu, Z. Wu, T. Li, J. Li, and C. Shen, “GMM and CNN hybrid method for short utterance speaker recognition,” IEEE Transactions on Industrial Informatics, vol. 14, no. 7, pp. 3244–3252, 2018.
-  A. Nagrani, J. S. Chung, and A. Zisserman, “VoxCeleb: A large-scale speaker identification dataset,” in INTERSPEECH, 2017.
-  J. S. Chung, A. Nagrani, and A. Zisserman, “VoxCeleb2: Deep speaker recognition,” in INTERSPEECH, 2018.
-  S.-Y. Chou, K.-H. Cheng, J.-S. R. Jang, and Y.-H. Yang, “Learning to match transient sound events using attentional similarity for few-shot sound recognition,” arXiv preprint arXiv:1812.01269, 2018.
-  S. Arik, J. Chen, K. Peng, W. Ping, and Y. Zhou, “Neural voice cloning with a few samples,” in Advances in Neural Information Processing Systems, 2018, pp. 10 019–10 029.
-  K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” CoRR, vol. abs/1409.1556, 2014.
-  K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” CoRR, vol. abs/1512.03385, 2015.
-  J. Snell, K. Swersky, and R. Zemel, “Prototypical networks for few-shot learning,” in Advances in Neural Information Processing Systems 30, I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, Eds. Curran Associates, Inc., 2017, pp. 4077–4087.
-  J. Bae and D.-S. Kim, “End-to-end speech command recognition with capsule network,” Proc. Interspeech 2018, pp. 776–780, 2018.
-  S. Sabour, N. Frosst, and G. E. Hinton, “Dynamic routing between capsules,” in Advances in neural information processing systems, 2017, pp. 3856–3866.
-  E. Xi, S. Bing, and Y. Jin, “Capsule network performance on complex data,” arXiv preprint arXiv:1712.03480, 2017.
-  S. Rifai, P. Vincent, X. Muller, X. Glorot, and Y. Bengio, “Contractive auto-encoders: Explicit invariance during feature extraction,” in Proceedings of the 28th International Conference on International Conference on Machine Learning. Omnipress, 2011, pp. 833–840.
-  C. Veaux, J. Yamagishi, and K. MacDonald, “CSTR VCTK Corpus: English multi-speaker corpus for CSTR voice cloning toolkit,” 2016.
-  R. Mukhometzianov and J. Carrillo, “Capsnet comparative performance evaluation for image classification,” arXiv preprint arXiv:1805.11195, 2018.