A Robust Diarization System for Measuring Dominance in Peer-Led Team Learning Groups



Abstract—  Peer-Led Team Learning (PLTL) is a structured learning model in which a team leader is appointed to facilitate collaborative problem solving among students in Science, Technology, Engineering and Mathematics (STEM) courses. This paper presents an informed HMM-based speaker diarization system. The minimum duration of short conversational turns and the number of participating students were fed as side information to the HMM system. A modified form of the Bayesian Information Criterion (BIC) was used for iterative merging and re-segmentation. Finally, we used the diarization output to compute a novel dominance score based on unsupervised acoustic analysis.


Harishchandra Dubey, Abhijeet Sangwan, John H. L. Hansen†

†This project was funded in part by AFRL under contract FA8750-15-1-0205 and partially by the University of Texas at Dallas from the Distinguished University Chair in Telecommunications Engineering held by J. H. L. Hansen. This material is presented to ensure timely dissemination of scholarly and technical work; copyright and all rights therein are retained by the authors or by the respective copyright holders. The original citation of this paper is: H. Dubey, A. Sangwan, J. H. L. Hansen, "A Robust Diarization System for Measuring Dominance in Peer-Led Team Learning Groups," IEEE Workshop on Spoken Language Technology (SLT), December 2016, San Diego, California, USA.
Center for Robust Speech Systems, Eric Jonsson School of Engineering
The University of Texas at Dallas, Richardson, TX 75080, USA
{harishchandra.dubey, abhijeet.sangwan, john.hansen}@utdallas.edu

Index Terms—  Bottleneck features, Denoising Autoencoders, Dominance Score, Peer-Led Team Learning, Robust Speaker Diarization.

1 Introduction

Peer-Led Team Learning (PLTL) is a strategy where student groups collaboratively solve problems for a given course. Such a session is usually coordinated by a peer leader who has taken and passed the course in an earlier semester. PLTL has been adopted for various undergraduate Science, Technology, Engineering and Mathematics (STEM) courses, where it has shown positive outcomes for learning [1]. Unlike PLTL, the traditional teaching model lacks one-to-one interaction and peer feedback. Peer leaders are expected to give helpful hints and comments during the students' discussion, but are not supposed to reveal solutions, in contrast to the traditional teaching model [2].

Analysis of PLTL team behavior using spoken language technology could also identify best practices in terms of team composition, early intervention, the impact of various team parameters on outcomes, etc. For example, PLTL recordings could help identify students who are experiencing difficulty in learning a subject early in the process. In the most general sense, PLTL is a small group meeting periodically and working towards a focused goal. Hence, various aspects of the group, such as team behavior, cohesion, productivity, and sentiment, are interesting topics to study.

Fig. 1: The proposed system consisting of six stages: Speech Activity Detection (SAD), Feature Extraction (MFCC), Mean and Variance normalization of features and splicing of features using 5 frames of context from past and future, stacked Denoising Autoencoder (DAE)-based dimension reduction, Informed HMM-based diarization, and unsupervised estimation of dominance score.

Particularly, this study makes the following contributions to audio-based analysis of PLTL groups: 1) We propose a feature engineering technique that combines audio features from multiple audio streams, using stacked denoising autoencoders (DAE) for non-linear dimension reduction of spliced MFCC features from the multiple streams; 2) We propose an informed HMM-based diarization system that accomplishes diarization via unsupervised joint segmentation and clustering; 3) We develop a new method for estimating a Dominance Score (DS) using unsupervised acoustic analysis; 4) We introduce a new technique for speaker-energy computation using Wavelet Packet Energy (WPE); 5) We evaluate the proposed methods on PLTL sessions extracted from the CRSS-PLTL corpus [3].

2 Proposed System

In this section, we discuss the proposed diarization system, which consists of stacked denoising autoencoders (DAE) for dimension reduction and an informed Hidden Markov Model (HMM) for joint segmentation and clustering. In the first step, we removed the non-speech (NS) frames from the audio signal and extracted MFCC features. The features are mean-and-variance normalized and then time-spliced using 5 context frames from the past and future. A stacked DAE is then used to reduce the feature dimension through a bottleneck architecture (BNF) [4]. Next, the HMM system uses the BNF along with two pieces of side information: the number of speakers and the minimum duration of speaker turns. Hence, we call the system an informed HMM system. The iterative diarization procedure has three steps: (i) initial segmentation, (ii) merging, and (iii) re-estimation; it is discussed in detail in Section 2.2. The CRSS-PLTL corpus used to evaluate the proposed algorithms was introduced in our earlier work [3].

2.1 Bottleneck Features: Denoising Autoencoder (DAE)-based Dimension Reduction

We extracted 13-dimensional Mel-Frequency Cepstral Coefficients (MFCC) from each frame. The parameters of the proposed system are given in Table 1. The MFCC features were first mean-and-variance normalized. Since all channels carry delayed and scaled versions of the same speech signal in a given frame, we concatenated the normalized MFCC features from all 7 channels to form a 91-dimensional (=7×13) feature super-vector. Next, we spliced each super-vector with a context of 5 past and 5 future frames, so the spliced feature dimension becomes 1001 (=91×11). Splicing incorporates long-term context, leading to a better representation of the multi-stream speech data.
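As a concrete illustration, the channel concatenation and context splicing can be sketched in a few lines of numpy. The random arrays below are placeholders for the normalized MFCC streams of an actual session:

```python
import numpy as np

def splice_features(frames, context=5):
    """Splice per-frame features with `context` past and future frames.

    frames: (num_frames, dim) array, e.g. 91-dim super-vectors built by
    concatenating 13 MFCCs from 7 channels.  Edges are handled by
    repeating the first/last frame.  Returns (num_frames, dim * 11)
    for context=5.
    """
    num_frames, dim = frames.shape
    padded = np.vstack([np.repeat(frames[:1], context, axis=0),
                        frames,
                        np.repeat(frames[-1:], context, axis=0)])
    return np.hstack([padded[i:i + num_frames] for i in range(2 * context + 1)])

# Placeholder features: 100 frames x 13 MFCCs from each of 7 channels.
mfcc_streams = [np.random.randn(100, 13) for _ in range(7)]
super_vectors = np.hstack(mfcc_streams)       # (100, 91)
spliced = splice_features(super_vectors)      # (100, 1001)
```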

Denoising autoencoders (DAE) have proven useful for dimension reduction [5]. A DAE is trained to learn a low-dimensional hidden representation of the data such that, given a noisy input, it can reconstruct the clean input. The spliced features were corrupted with additive random noise before being fed into the stacked DAE, which was trained to minimize the reconstruction error with respect to the original input. The high dimension of the spliced features (1001) necessitated dimension reduction, which we achieved by taking the bottleneck features (BNF) from a stacked DAE. The parameters of the stacked DAE used for dimension reduction are given in Table 1. Several DAEs were stacked to form a 5-layer deep network. We used the PDNN toolkit [6] with a corruption parameter of 0.2, a learning rate of 0.01, and a momentum factor of 0.05.
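A minimal single-layer sketch of the denoising training loop, using plain numpy instead of PDNN, toy low-rank data in place of the 1001-dimensional spliced features, and simplified hyperparameters (tied weights, no momentum, no stacking):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy data with low-dimensional structure, standing in for the
# 1001-dim spliced MFCC super-vectors (all sizes are placeholders).
X = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 20)) * 0.5

# One DAE layer with a 5-unit bottleneck and tied weights;
# the paper stacks several such layers into a deeper network.
W = rng.standard_normal((20, 5)) * 0.1
b_h, b_v = np.zeros(5), np.zeros(20)
lr, corruption = 0.05, 0.2

def recon_error(X):
    H = sigmoid(X @ W + b_h)
    return np.mean((H @ W.T + b_v - X) ** 2)

err_before = recon_error(X)
for epoch in range(300):
    # Corrupt the input with additive noise, reconstruct the CLEAN input.
    X_noisy = X + corruption * rng.standard_normal(X.shape)
    H = sigmoid(X_noisy @ W + b_h)       # bottleneck code
    X_hat = H @ W.T + b_v                # linear decoder (tied weights)
    diff = X_hat - X
    # Gradient steps for the mean squared reconstruction error.
    dH = (diff @ W) * H * (1 - H)
    W -= lr * (X_noisy.T @ dH + diff.T @ H) / len(X)
    b_h -= lr * dH.mean(axis=0)
    b_v -= lr * diff.mean(axis=0)
err_after = recon_error(X)
```

After training, the 5-dimensional code `H` plays the role of the bottleneck features passed on to diarization.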

2.2 Informed HMM-based Diarization

Diarization of PLTL sessions differs from typical scenarios in the side information available, such as the speaker count and turn statistics. Rapid short turns, overlapped speech, and high levels of reverberation and noise make the task challenging, and most previously studied diarization systems did not address such challenges [3]. HMMs have been used in various forms for audio segmentation tasks in previous studies [7, 8, 9]. However, the use of side information, the application to PLTL sessions, and the use of stacked DAE-based BNF are novel contributions of this paper with respect to diarization. Initially, we performed over-segmentation by dividing the speech into K segments, where K is 3 to 6 times the expected number of speakers. An HMM with K states is assumed for the initial segments. Each HMM state has an output probability density function (PDF) modeled by an M-component Gaussian Mixture Model (GMM). Each HMM state was expanded into sub-states to incorporate the minimum duration constraint; all sub-states of a given HMM state (hypothesized speaker cluster) share the GMM of that state. The HMM was trained using the Expectation-Maximization (EM) algorithm. Once the HMM was trained, we obtained the Viterbi path for all frames. Next, we used the Viterbi path to check a binary merging hypothesis based on a modified BIC criterion [3]. After a merge iteration finished, a new HMM with fewer states was trained. The whole process was repeated until it converged based on two conditions: merging stops either when the number of HMM states equals the number of speakers, or when merging yields no further improvement in the likelihood ratio.
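One common way to realize the minimum-duration constraint is to expand each speaker state into a left-to-right chain of sub-states, which is sketched below; the self-loop probability `p_stay` is a hypothetical value for illustration, not a parameter from the paper:

```python
import numpy as np

def min_duration_transitions(num_speakers, num_substates, p_stay=0.9):
    """Build the transition matrix of an HMM in which each speaker state
    is expanded into a left-to-right chain of `num_substates` sub-states,
    so the model must emit at least `num_substates` frames (e.g. 50-100
    frames at a 10 ms skip-rate for 0.5-1 s) before leaving a speaker.
    `p_stay` is a hypothetical self-loop probability.
    """
    n = num_speakers * num_substates
    A = np.zeros((n, n))
    for s in range(num_speakers):
        base = s * num_substates
        for d in range(num_substates - 1):
            A[base + d, base + d + 1] = 1.0          # forced advance
        last = base + num_substates - 1
        A[last, last] = p_stay                        # stay with this speaker...
        for t in range(num_speakers):                 # ...or enter another speaker
            if t != s:
                A[last, t * num_substates] = (1 - p_stay) / (num_speakers - 1)
    return A

A = min_duration_transitions(num_speakers=3, num_substates=4)
```

All sub-states of one speaker share the same GMM emission density, so the expansion changes only the duration model, not the number of emission parameters.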

We performed merging based on a variant of BIC that eliminates the need for a threshold (penalty term). This trick was first developed to improve speaker change detection compared to standard BIC [10]. In this paper, we use the same modeling technique for a slightly different binary hypothesis: deciding whether to merge two over-segmented segments, or equivalently, two HMM states. Some modifications were applied for merging the most similar segments (HMM states). First, the minimum duration of staying in an HMM state or segment is much lower, 0.5 s to 1 s, owing to the rapid short conversational turns. Each initial segment was modeled with an M-component Gaussian mixture. After merging two initial segments, each modeled with M components, the merged segment is modeled with 2M components. In this way, the number of parameters of the GMM for the merged segment equals the sum of the numbers of parameters in the child segments. By keeping the number of parameters the same at each merging step, we eliminated the BIC penalty term. Once a merge is done, a new, smaller HMM is estimated, where the GMM for each state is re-estimated with the EM algorithm using the acoustic features assigned to that HMM state (speaker).
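The penalty-free merge criterion can be sketched as follows. For brevity, each child segment is modeled with a single diagonal Gaussian and the merged model is the two-component mixture of the child Gaussians, without the EM re-estimation used in the paper:

```python
import numpy as np

def gauss_logpdf(x, mean, var):
    """Per-frame log-density under a diagonal Gaussian."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var, axis=1)

def merge_score(seg_a, seg_b):
    """Penalty-free, BIC-style merge score in the spirit of [10].

    Each child segment is modeled here with a single diagonal Gaussian and
    the merged segment with the 2-component mixture of the child Gaussians,
    so both hypotheses have the same number of parameters and the BIC
    penalty cancels.  (The paper uses M-component GMMs and re-estimates the
    merged 2M-component GMM with EM; that step is skipped in this sketch.)
    Higher scores indicate the segments more likely share a speaker.
    """
    stats = [(s.mean(axis=0), s.var(axis=0) + 1e-6) for s in (seg_a, seg_b)]
    ll_sep = sum(gauss_logpdf(s, m, v).sum()
                 for s, (m, v) in zip((seg_a, seg_b), stats))
    data = np.vstack([seg_a, seg_b])
    w = np.array([len(seg_a), len(seg_b)], float) / len(data)
    lp = [np.log(w[i]) + gauss_logpdf(data, *stats[i]) for i in range(2)]
    ll_merged = np.logaddexp(lp[0], lp[1]).sum()
    return ll_merged - ll_sep

rng = np.random.default_rng(1)
# Two draws from the same distribution vs. two well-separated ones.
score_same = merge_score(rng.normal(0, 1, (200, 5)), rng.normal(0, 1, (200, 5)))
score_diff = merge_score(rng.normal(0, 1, (200, 5)), rng.normal(4, 1, (200, 5)))
```

In the iterative procedure, the pair of states with the highest score is merged first, and merging stops once the convergence conditions above are met.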

2.3 Speaker Energy Using Wavelet Packet Decomposition

In earlier work, we used formant energy to compute the speaker energy [3]. Formant energy is robust to noise and distortion compared to energy computed from the short-time spectrum, but at the expense of a high computational cost. In this paper, we employ wavelet packet decomposition (WPD) to estimate the speaker energy. Wavelet packets (WPs) provide good time-frequency resolution at a reasonable computational load [11], and several computationally simple methods exist for estimating them. We summed the squared WP coefficients corresponding to the frequency range [50, 2000] Hz to capture the speech intensity while ignoring spurious background artifacts and noise. We used the Symlets-6 (sym6) wavelet with 6 levels of decomposition to compute the speaker energy.
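A simplified illustration of band-limited energy from a wavelet packet decomposition, using the Haar wavelet and 3 levels instead of the paper's sym6 with 6 levels:

```python
import numpy as np

def haar_wp_energy(x, levels=3):
    """Band energies from a wavelet packet decomposition.

    Simplified stand-in for the paper's sym6 / 6-level decomposition:
    Haar filters and 3 levels, giving 2**levels packet nodes.  Node 0
    (the all-low-pass path) covers the lowest frequency band; note that
    the natural ordering of the other nodes is not monotone in frequency.
    """
    h = np.array([1.0, 1.0]) / np.sqrt(2)    # low-pass (scaling) filter
    g = np.array([1.0, -1.0]) / np.sqrt(2)   # high-pass (wavelet) filter
    nodes = [np.asarray(x, dtype=float)]
    for _ in range(levels):
        nxt = []
        for node in nodes:
            nxt.append(np.convolve(node, h)[1::2])   # filter + downsample
            nxt.append(np.convolve(node, g)[1::2])
        nodes = nxt
    return np.array([np.sum(n ** 2) for n in nodes])

# A 200 Hz tone sampled at 8 kHz: with 3 levels, node 0 spans roughly
# 0-500 Hz, so it should capture most of the tone's energy.  The paper
# instead sums the squared coefficients of the bands covering 50-2000 Hz
# from a 6-level sym6 decomposition.
t = np.arange(1024) / 8000.0
energies = haar_wp_energy(np.sin(2 * np.pi * 200 * t), levels=3)
```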

3 Measuring Dominance in a PLTL Session

Dominance is a fundamental aspect of interactions in a PLTL session or small-group meeting. Dominance in meetings has been measured using speaker diarization techniques [12]. A supervised model of dominance using short utterances was developed in [13]; however, it was developed and evaluated in a constrained setting very different from real-life scenarios such as PLTL sessions. Multi-modal features derived from audio and video streams have been used to identify the dominant person in a meeting segment [14]. Speaking time has been found to correlate with the perceived dominance of individuals in group settings [15]. We developed an unsupervised feature for measuring dominance: a dominance score (DS) assigned to each student in a PLTL session through unsupervised acoustic analysis of their speech segments. The proposed DS encapsulates the probability of a given student being dominant in collaborative problem solving. We considered three features derived from each speaker's speech, all available from the proposed informed HMM-based diarization system shown in Figure 1: the turn-taken-sum (T) [16], the speaking-time-sum (S), and the speaking-energy-sum (E). The turn-taken-sum (T) is the number of turns taken by the speaker in a given segment; a conversational turn is a speech segment from the speaker bounded by speech from other speakers and/or by pauses (non-speech). The speaking-time-sum (S) is the total duration (in seconds) for which the speaker was speaking; overlapped speech was not counted towards it. The speaking-energy-sum (E) is the total energy of all speech segments belonging to the speaker, computed using Wavelet Packet Decomposition (WPD) [11] as discussed in Section 2.3. These features are correlated among themselves.
For example, a person who takes many turns is likely to speak for a longer time than others, and accumulating speaker energy over a longer duration yields a higher E. After extracting the three features T, S and E, we normalized each feature dimension; the mean and variance were calculated over the entire PLTL session (70 minutes of audio). We then projected the normalized features onto the eigenvector corresponding to the largest eigenvalue of the feature space. This was realized by principal component analysis (PCA), which combines the three features into a single combined feature, denoted c. We divided the entire PLTL session into 5-minute segments and computed c for each speaker in each segment; a dominance score was then estimated per speaker per segment. Let c_i be the combined feature of the i-th speaker. In the CRSS-PLTL corpus, a PLTL session has 6 to 9 speakers including the team leader. We define the feature vector c = [c_1, ..., c_N], where N is the number of speakers. The dominance score (DS) of each speaker is estimated by passing the feature vector c through a soft-max function, which converts these numbers into probability scores. Thus, we have

DS_i = exp(c_i) / Σ_{j=1}^{N} exp(c_j)

for i = 1, ..., N, where DS_i is the dominance score of the i-th speaker. For PLTL groups in particular, the dominance score of each student is an important metric for studying the inter-session variability of a group. Such a comparison would not be possible with previously studied supervised dominance models, which predicted only the most dominant speaker [17, 18, 19].
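The feature combination and soft-max mapping can be sketched as follows; note that this sketch normalizes with segment-level statistics rather than the session-level statistics used in the paper, and the input values are made up:

```python
import numpy as np

def dominance_scores(turns, time_s, energy):
    """Unsupervised dominance scores for the N speakers of one segment.

    turns / time_s / energy: length-N arrays of turn-taken-sum (T),
    speaking-time-sum (S) and speaking-energy-sum (E).  Features are
    z-normalized, combined by projection onto the leading eigenvector
    (PCA), and mapped to probabilities with a soft-max.  Normalization
    here uses the segment itself for simplicity; the paper uses
    session-level statistics.
    """
    F = np.column_stack([turns, time_s, energy]).astype(float)
    F = (F - F.mean(axis=0)) / (F.std(axis=0) + 1e-9)
    _, eigvecs = np.linalg.eigh(np.cov(F.T))
    pc = eigvecs[:, -1]                     # leading principal direction
    pc = pc if pc.sum() >= 0 else -pc       # orient: more activity -> higher c
    c = F @ pc                              # combined feature c_i per speaker
    e = np.exp(c - c.max())                 # numerically stable soft-max
    return e / e.sum()

# Hypothetical T, S, E values for three speakers in one 5-minute segment.
ds = dominance_scores([12, 3, 1], [120.0, 40.0, 10.0], [5.1, 1.2, 0.3])
```

The scores sum to one over the speakers of a segment, so they can be read as the probability of each speaker being the dominant one.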

Parameter Value
Stacked DAE input layer dim. 1001
Stacked DAE second layer dim. 91
Stacked DAE bottleneck layer dim. 21
Number of Hidden Layers 3
First Layer activation tanh
Hidden Layer activation sigmoid
Initial states in HMM 12-18
Number of GMM components 2-5
Minimum duration for HMM states 0.5s-1s
Splicing context (past) 5 frames
Splicing context (future) 5 frames
Feature type 13-MFCC
Window-length 25ms
Skip-rate 10ms
Sampling rate 8000 Hz
Table 1: The parameters set for proposed system.

4 Results & Discussions

4.1 DER Evaluation

We obtained manual annotations for speech activity detection (SAD) and speaker diarization. The evaluation set consisted of one PLTL session with seven students, lasting approximately 70 minutes. We downsampled the audio data to 8 kHz before processing. We used the Diarization Error Rate (DER) to evaluate the proposed system. The NIST Rich Transcription Evaluation [20] defines DER as follows:


DER = (T_FA + T_MISS + T_SPKR) / T_TOTAL × 100%

where T_FA is the total duration of non-speech segments detected as speech, T_MISS is the total duration of speech segments detected as non-speech, T_SPKR is the total duration of speech clustered as incorrect speakers, and T_TOTAL is the total duration of speech in the ground truth.

The parameters of the proposed system are given in Table 1. We extracted 13-dimensional MFCC features from each of the seven parallel streams of a PLTL session; we chose a session with 7 students and hence 7 streams of audio data for evaluation. After concatenating the features from each stream, we get a 91-dimensional (=13×7) feature super-vector. After splicing the super-vectors with 5 frames of past and future context, as shown in Figure 1, the final feature dimension is 1001 (=11×91). The spliced feature super-vector is fed to the stacked denoising autoencoder (DAE) to extract 21-dimensional bottleneck features. A stacked DAE with three hidden layers was chosen, where the middle hidden layer acts as the bottleneck layer. The bottleneck (BNF) features were fed to the informed HMM-based diarization system.

We used Oracle SAD in the proposed system to validate the accuracy of the diarization component. However, we also performed a case study in which non-speech was modeled as an additional HMM state, and we compared the diarization accuracy of BNF against raw 13-dimensional MFCC features (concatenated across streams in the MFCC case). Table 2 shows the diarization accuracy in the various cases. The NO SAD case refers to not using any SAD labels and modeling non-speech as an additional HMM state. Non-speech has several distinct varieties, such as silences (with extreme noise), overlapped speech, etc., which makes diarization without SAD labels challenging and led to degraded accuracy (see Table 2).
We can see that BNF combined with the HMM is robust to changes in the minimum-duration constraint and, to some extent, to the absence of SAD labels. The state-of-the-art LIUM baseline [21] is borrowed from our earlier work for comparison [3]. We observe an absolute improvement of approximately 27% DER over the LIUM baseline, of which approximately 12% comes from using BNF features instead of MFCC (Oracle SAD, 1 s case).
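A minimal frame-level sketch of the DER computation defined above; it assumes the hypothesis speaker labels are already mapped to the reference labels, and ignores overlap and scoring collars:

```python
import numpy as np

def der_percent(ref, hyp):
    """DER from frame-level labels (0 = non-speech, 1..K = speaker id).

    Assumes hypothesis speaker labels are already mapped to reference
    labels; overlap and scoring collars are ignored in this sketch.
    """
    ref, hyp = np.asarray(ref), np.asarray(hyp)
    e_fa = np.sum((ref == 0) & (hyp != 0))                  # false-alarm speech
    e_miss = np.sum((ref != 0) & (hyp == 0))                # missed speech
    e_spk = np.sum((ref != 0) & (hyp != 0) & (ref != hyp))  # speaker error
    total = np.sum(ref != 0)                                # ground-truth speech
    return 100.0 * (e_fa + e_miss + e_spk) / total

# 1 false alarm + 1 miss + 1 speaker-error frame over 6 speech frames.
ref = [0, 1, 1, 1, 2, 2, 0, 2]
hyp = [0, 1, 1, 2, 2, 2, 2, 0]
```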

SAD        Features   Dmin (s)   K    M    DER (%)
Oracle     21-DAE     1          12   2    8.05
Oracle     21-DAE     0.5        12   2    8.87
NO SAD     21-DAE     1          12   2    15.83
NO SAD     21-DAE     0.5        12   2    16.64
Oracle     13-MFCC    1          12   2    19.98
Oracle     13-MFCC    0.5        12   2    18.95
NO SAD     13-MFCC    1          12   2    33.23
NO SAD     13-MFCC    0.5        12   2    41.71
LIUM [21]  -          -          -    -    35.80

Table 2: Comparison of DER for various parameter settings of the proposed system. The 13-dimensional MFCC features from each stream were concatenated for training the HMM system as additional cases for the comparative study. K is the number of initial clusters, Dmin (s) is the minimum time the HMM must stay in each state, and M is the number of Gaussians used for modeling the initial segments.

4.2 Dominance Score

The PLTL session was divided into segments of five minutes' duration. We chose a PLTL session with seven students. For each five-minute segment, we computed a dominance score (DS) for each of the seven students using unsupervised acoustic analysis. To obtain ground truth, we conducted an Intelligent Listening Test (ILT): three annotators listened to each five-minute segment and assigned a dominance rating (R) to each student per segment. The ground-truth dominance rating R was a number between 1 and 5. Speakers who were present in the session but did not speak in the chosen segment were assigned R = 1. The least- and most-dominant students who spoke in a segment were assigned R = 2 and R = 5, respectively. Students who spoke in the segment but were neither least- nor most-dominant were assigned an R between 2.25 and 4.75; only the values 2.25, 2.50, 2.75, 3.0, 3.25, 3.50, 3.75, 4.0, 4.25, 4.50 and 4.75 were allowed, to ensure consistency in the evaluations. We averaged the ratings of all three listeners to obtain the final ground truth used for computing the correlation with the unsupervised dominance score. Since the proposed dominance score was derived by unsupervised acoustic analysis, we used Pearson's correlation between the ground-truth rating R and the proposed score DS. The resulting correlation was 0.8748. This high correlation validates the efficacy of the proposed dominance score for characterizing individuals in a PLTL group.

5 Conclusions

This paper showed that acoustic cues can help in mining meaningful analytics, such as dominance, from a Peer-Led Team Learning (PLTL) session. We used a stacked denoising autoencoder (DAE) for dimension reduction of spliced feature super-vectors obtained by concatenating features from the seven streams of multi-stream PLTL data. The bottleneck (BNF) features from the stacked DAE were fed to an informed HMM-based speaker diarization system. Finally, a dominance score was estimated by unsupervised acoustic analysis of each speaker's segments. We evaluated the proposed system on the CRSS-PLTL corpus established in [3].


  • [1] Julia J Snyder, Jeremy D Sloane, Ryan DP Dunk, and Jason R Wiles, “Peer-led team learning helps minority students succeed,” PLoS Biol, vol. 14, no. 3, pp. e1002398, 2016.
  • [2] Mark S Cracolice and John C Deming, “Peer-led team learning,” The Science Teacher, vol. 68, no. 1, pp. 20, 2001.
  • [3] H. Dubey, L. Kaushik, A. Sangwan, and John H. L. Hansen, “A speaker diarization system for studying peer-led team learning groups,” in INTERSPEECH, 2016.
  • [4] Jonas Gehring, Yajie Miao, Florian Metze, and Alex Waibel, “Extracting deep bottleneck features using stacked auto-encoders,” in IEEE ICASSP, 2013.
  • [5] Yasi Wang, Hongxun Yao, and Sicheng Zhao, “Auto-encoder based dimensionality reduction,” Neurocomputing, vol. 184, pp. 232–242, 2016.
  • [6] Y. Miao, “PDNN: A Python toolkit for deep learning,” URL: https://www.cs.cmu.edu/~ymiao/pdnntk.html.
  • [7] Margarita Kotti, Vassiliki Moschou, and Constantine Kotropoulos, “Speaker segmentation and clustering,” Signal processing, vol. 88, no. 5, pp. 1091–1124, 2008.
  • [8] Jitendra Ajmera, Herve Bourlard, and Itshak Lapidot, “Improved unknown-multiple speaker clustering using hmm,” Tech. Rep., IDIAP, 2002.
  • [9] Rongqing Huang and John HL Hansen, “Advances in unsupervised audio classification and segmentation for the broadcast news and ngsw corpora,” IEEE Trans. on Audio, Speech, and Language Processing, vol. 14, no. 3, pp. 907–919, 2006.
  • [10] Jitendra Ajmera, Iain McCowan, and Hervé Bourlard, “Robust speaker change detection,” IEEE Signal Processing Letters, vol. 11, no. 8, pp. 649–651, 2004.
  • [11] Mladen Victor Wickerhauser, “Lectures on wavelet packet algorithms,” in Lecture notes, INRIA, 1991.
  • [12] Hayley Hung, Yan Huang, Gerald Friedland, and Daniel Gatica-Perez, “Estimating the dominant person in multi-party conversations using speaker diarization strategies,” in IEEE ICASSP, 2008.
  • [13] Sumit Basu, Tanzeem Choudhury, Brian Clarkson, Alex Pentland, et al., “Learning human interactions with the influence model,” NIPS, 2001.
  • [14] Hayley Hung, Dinesh Babu Jayagopi, Chuohao Yeo, Gerald Friedland, Sileye O Ba, Jean-Marc Odobez, Kannan Ramchandran, Nikki Mirghafori, and Daniel Gatica-Perez, “Using audio and video features to classify the most dominant person in a group meeting,” 2007, number LIDIAP-CONF-2007-016.
  • [15] Marianne Schmid Mast, “Dominance as expressed and inferred through speaking time,” Human Communication Research, vol. 28, no. 3, pp. 420–450, 2002.
  • [16] Janine Larrue and Alain Trognon, “Organization of turn-taking and mechanisms for turn-taking repairs in a chaired meeting,” Journal of Pragmatics, vol. 19, no. 2, pp. 177–196, 1993.
  • [17] Dinesh Babu Jayagopi, Hayley Hung, Chuohao Yeo, and Daniel Gatica-Perez, “Modeling dominance in group conversations using nonverbal activity cues,” IEEE Trans. on Audio, Speech, and Language Processing, vol. 17, no. 3, pp. 501–513, 2009.
  • [18] Rongqing Huang and John H. L. Hansen, “Advances in unsupervised audio classification and segmentation for the broadcast news and ngsw corpora,” IEEE Trans. on Audio, Speech, and Language Processing, vol. 14, no. 3, pp. 907–919, 2006.
  • [19] Hayley Hung and Daniel Gatica-Perez, “Estimating cohesion in small groups using audio-visual nonverbal behavior,” IEEE Trans. on Multimedia, vol. 12, no. 6, pp. 563–575, 2010.
  • [20] NIST, “Rich transcription 2004 spring meeting recognition evaluation plan,” http://www.nist.gov/speech/, 2004.
  • [21] Sylvain Meignier and Teva Merlin, “Lium spkdiarization: an open source toolkit for diarization,” in CMU SPUD Workshop, 2010.