Unsupervised Discovery of Structured Acoustic Tokens with Applications to Spoken Term Detection

Cheng-Tao Chung and Lin-Shan Lee
Abstract

In this paper, we compare two paradigms for unsupervised discovery of structured acoustic tokens directly from speech corpora without any human annotation. The Multi-granular Paradigm seeks to capture all available information in the corpora with multiple sets of tokens for different model granularities. The Hierarchical Paradigm attempts to jointly learn several levels of signal representations in a hierarchical structure. The two paradigms are unified within a theoretical framework in this paper. Query-by-Example Spoken Term Detection (QbE-STD) experiments on the QUESST dataset of MediaEval 2015 verify the competitiveness of the acoustic tokens. The Enhanced Relevance Score (ERS) proposed in this work improves both paradigms for the task of QbE-STD. We also list results on the ABX evaluation task of the Zero Resource Challenge 2015 for comparing the two paradigms.

zero resource, spoken term detection, unsupervised term discovery, automatic speech recognition

I Introduction

Current speech recognition technology is dominated by supervised learning paradigms that rely on massive quantities of human-generated linguistic labels. Acquiring such human-labeled audio is expensive, and it would be hard to scale such efforts with the ever-growing quantity of audio content on the Internet. Generating such labeled corpora also requires knowledge of the language, including its phoneme set and pronunciation lexicon. This becomes even more difficult when multiple languages are mixed in the same corpora. This is why researchers have begun to investigate semi-supervised and unsupervised paradigms that attempt to train acoustic models with limited labeled audio in the low-resource scenario [1], and no labeled audio in the zero-resource scenario [2, 3, 4, 5, 6, 7, 8]. In this paper we focus our discussion on the unsupervised paradigm, where the machine has to learn token-like representations of the speech signal directly from the unannotated corpora.

The two most popular forms of speech signal representation are either a sequence of real-valued frame-level feature vectors (like Mel-Frequency Cepstral Coefficients (MFCC) or spectrograms), or a sequence of discrete tokens (like words or phonemes). Likewise, works on unsupervised speech technologies extracted either frame-level features [9, 10, 11, 12, 13, 14, 15, 16] or discrete tokens [17, 18, 19, 20, 21, 22, 23, 24] out of an unlabeled corpus. To learn unsupervised frame-level representations, Zhang and Glass [25] used posteriorgram features from an unsupervised GMM universal background model (UBM), Chen et al. [26] used posteriorgrams from a non-parametric infinite GMM, Kamper et al. [27] proposed the correspondence autoencoder (cAE), an AE-like deep NN that incorporates top-down constraints by using aligned frames from discovered words as input-output pairs, and we extracted Bottleneck Features (BNF) from a DNN trained with unsupervised HMM labels as targets in our previous work [28]. To learn discrete tokens, Siu et al. [4] used an iterative re-estimation and unsupervised decoding procedure with traditional HMMs, Lee and Glass used a non-parametric Bayesian HMM [5], Kamper et al. [8] used embedded segmental K-means models, and we used traditional HMMs with additional constraints in our previous works [21, 22, 23].

In this work (sponsored by the Ministry of Science and Technology, R.O.C.), we organize our previous contributions on discrete token learning with unsupervised HMMs into two paradigms for HMM training. The Multi-granular Paradigm uses multiple acoustic token sets scattered over a granularity space [22, 23], and can be applied to semi-supervised speaker adaptation [7]. The Hierarchical Paradigm uses two-level word-like and subword-like tokens [21, 28], and can be used in query expansion for semantic retrieval of spoken documents [29, 24]. Both of them produce structured acoustic tokens, rather than a single set of acoustic tokens. We unify the paradigms within one framework, and compare the two paradigms on the same experiments.

Query-by-Example Spoken Term Detection (QbE-STD) was chosen as the example application for comparing tokens trained with the two paradigms. QbE-STD refers to the task of finding all occurrences of an input spoken query in a large target audio corpus. Most QbE-STD approaches were based on automatic speech recognition (ASR), transforming speech into words or subwords for token matching [30, 31, 32, 33], with performance relying heavily on the ASR accuracy [34]. This implies annotated training corpora properly matched to the spoken content are necessary. Because both the input query and the target corpus are spoken, it is possible to directly match the spoken query to the target corpus without knowing the written form of either the query or the target corpus, bypassing the need for supervised speech recognition. This is especially attractive for languages with limited annotated data [35, 36] or spoken content in unknown languages. There are in general two approaches to compute the distance between a spoken query and a spoken document (an utterance in the target audio corpus): comparing the acoustic feature sequences directly, or transcribing the audio files into sequences of acoustic tokens and then comparing the transcribed token sequences. The former approach is easily affected by speaker mismatch and varying acoustic conditions, and its high computation cost also makes it hard to scale to larger spoken corpora [2, 3, 37, 38, 39, 40, 41, 18]. The latter approach smooths out signal variations through the token models and has the advantage of much lower on-line computation when the target corpus is large. The methods proposed in this paper belong to the latter approach.

The rest of the work is organized as follows. The Multi-granular Paradigm is explained in Section II-A and the Hierarchical Paradigm in Section II-B. We unify the two paradigms with theoretical analysis in Section III, and reason that the Hierarchical Paradigm can be reduced to the Multi-granular Paradigm when certain criteria are met. Query-by-Example Spoken Term Detection (QbE-STD) using the tokens is explained in Section IV. The settings of training the tokens on either the documents or the queries for the QbE-STD experiments are explained in Section V. The QbE-STD experiments on the QUESST dataset [42] of MediaEval 2015 in Section VI support our framework and verify the competitiveness of the acoustic tokens. Additional results on the Zero Resource Challenge ABX evaluation task [43] are also presented in Section VII. Finally, we explain how to choose which Paradigm to use in Section VIII and give our concluding remarks in Section IX.

II The Paradigms

Given an unlabeled speech corpus, it is straightforward to discover acoustic tokens using unsupervised HMMs with a chosen configuration. We assume the number of states in each HMM, denoted m, is the same for all HMMs in the HMM set considered during initialization, and denote the total number of distinct acoustic tokens as n, with ψ = (m, n) used to represent the model configuration. In each iteration t, we then train the HMM parameters θ_t with the label sequences W_{t-1} obtained in the previous iteration as in Eq. (1), and decode the label sequences W_t with the obtained parameters as in Eq. (2) [20].

$\theta_t = \arg\max_{\theta} P(X \mid \theta, W_{t-1})$   (1)
$W_t = \arg\max_{W} P(W \mid X, \theta_t)$   (2)

where X is the acoustic vector sequence for the whole corpus being considered. Eq. (1) is the maximum likelihood training of HMMs, and the only difference here is that we train on the label sequences obtained in the previous iteration rather than on ground truth labels. Eq. (2) is the Viterbi decoding based on the model set θ_t. This method for training HMMs was introduced by Gish et al. [20]. To find a better initial label, we used the method explained in [21] to obtain W_0. To model more complex structures in speech, we proposed two natural ways to extend this method: (1) use multiple sets of HMMs with different configurations [22], which we call the Multi-granular Paradigm in this work; (2) combine the HMMs into longer tokens to construct language structures [21], which we call the Hierarchical Paradigm in this work. The two Paradigms are explained below.
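
The following is a minimal, simplified sketch of the alternating training (Eq. (1)) and decoding (Eq. (2)) loop, not the exact recipe of [20, 21]. It assumes the corpus has already been cut into fixed-length segments, and it replaces full Viterbi decoding over whole utterances with a simpler re-assignment of each segment to the most likely token HMM; the function and its arguments are hypothetical and the hmmlearn settings are illustrative.

```python
# A minimal sketch of the alternating training / decoding loop (Eqs. (1)-(2)),
# assuming pre-cut fixed-length segments and per-segment re-assignment instead
# of full Viterbi decoding over whole utterances.
import numpy as np
from hmmlearn import hmm  # pip install hmmlearn


def train_tokens(segments, n_tokens, m_states, init_labels, n_iter=5):
    """segments: list of (frames, feat_dim) arrays; init_labels: initial token ids."""
    labels = np.asarray(init_labels)
    for _ in range(n_iter):
        # Eq. (1): ML training of each token HMM on its currently assigned segments.
        models = []
        for k in range(n_tokens):
            data = [s for s, y in zip(segments, labels) if y == k]
            model = hmm.GaussianHMM(n_components=m_states,
                                    covariance_type="diag", n_iter=10)
            if data:
                model.fit(np.vstack(data), lengths=[len(s) for s in data])
            models.append(model)
        # Eq. (2), simplified: re-label each segment with the most likely token HMM.
        new_labels = np.array([
            np.argmax([m_.score(s) if hasattr(m_, "transmat_") else -np.inf
                       for m_ in models])
            for s in segments])
        if np.array_equal(new_labels, labels):   # labels converged
            break
        labels = new_labels
    return models, labels
```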

II-A The Multi-granular Paradigm

In this paradigm, we take into consideration the cases where several intrinsic acoustic representations exist in the spoken corpus. Acoustic units with different lengths, such as phonemes, syllables, words, and phrases, have different temporal granularity. Different phonetic clusters, such as speaker-independent, gender-dependent, and speaker-dependent phonemes, have different phonetic granularity. We wish to develop a set of acoustic token HMMs for every different granularity configuration (temporal and phonetic) on the same corpus. With multiple granularity configurations, we can have multiple sets of acoustic token HMMs trained on the same corpus. We call the level of representation corresponding to a granularity configuration a layer in the discussion below.

Fig. 1: Model granularity space for acoustic token configurations

Here each layer of acoustic tokens is defined by a set of temporal and phonetic granularity parameters. Different layers of tokens are discovered independently of each other in the Multi-granular Paradigm. The transcription of a signal decoded with these tokens can be considered a temporal segmentation of the signal, so the HMM length (or the number of states m in each HMM) represents the temporal granularity. The set of all distinct acoustic tokens can be considered a segmentation of the phonetic space, so the total number of distinct acoustic tokens n represents the phonetic granularity. This gives a two-dimensional representation of the acoustic token configurations in terms of temporal and phonetic granularities as in Fig. 1. Any point in this two-dimensional space in Fig. 1 corresponds to an acoustic token configuration. Note that there can be a third dimension, the acoustic granularity, which is the number of Gaussians in each HMM state, but the effect of that dimension has been shown to be negligible in the experiments, thus we simply set the number of Gaussians in each state to 4. Although the selection of the hyperparameters can be arbitrary in this two-dimensional space in principle, here we simply select I temporal granularities and J phonetic granularities, forming a two-dimensional array of I × J hyperparameter sets in the granularity space and I × J sets of acoustic tokens. We denote the collection of selected granularities as Ψ, as shown in Eq. (3).

$\Psi = \{\, \psi_{ij} = (m_i, n_j) \mid i = 1, \dots, I,\; j = 1, \dots, J \,\}$   (3)

Note that the optimal granularities can be dependent on the spoken corpus and there is no emphasis on discovering acoustic tokens close to linguistically meaningful units for the Multi-granular Paradigm. Tokens at each granularity may not correspond to any of these linguistically meaningful units, but with multiple sets of tokens, most of the structures and characteristics of the corpus can be captured if the granularities are evenly distributed over the granularity space.
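
As a concrete illustration of Eq. (3), the snippet below enumerates a small granularity grid; the particular values of m and n are illustrative (they match the ranges explored in the experiments later), not a prescribed setting.

```python
# Illustrative granularity grid in the spirit of Eq. (3).
temporal_granularities = [3, 5, 7]    # m: number of states per token HMM
phonetic_granularities = [100, 200]   # n: number of distinct acoustic tokens
Psi = [(m, n) for m in temporal_granularities for n in phonetic_granularities]
# Each (m, n) in Psi defines one independently trained Multi-granular token set.
print(Psi)
```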

II-B The Hierarchical Paradigm

The Hierarchical Paradigm, on the other hand, automatically learns the word-like and subword-like tokens, as well as all relevant knowledge for this set of tokens, such as the lexicon and N-gram language model, referred to as the linguistic structure of the spoken language of the given corpus. The goal is to find the parameters θ for the linguistic structure and the word-like token label sequences on the observed acoustic feature vector sequences for the corpus considered, as discussed in [21]. In this work we leave out the language models in the discussion in order to compare the Paradigms, so the parameter set θ includes two parts: θ_A for the acoustic HMMs of the subword-like tokens, and θ_L for the lexicon of word-like tokens in terms of subword-like token sequences. u denotes the number of states of the longest word-like token in the lexicon, and v denotes the number of words in the lexicon. The granularity parameters (m, n) are chosen, u is set to an integer multiple of m, and v is inferred. Contrary to the Multi-granular Paradigm, where acoustic tokens with multiple granularities are trained independently, in the Hierarchical Paradigm the subword-like tokens and word-like tokens are like two correlated granularities (m, n) and (u, v) that are jointly trained. The iterations in Eq. (1) and Eq. (2) can be further segmented into several cascaded stages that use slightly different objectives for acoustic, linguistic and lexical optimization. Two of these stages are summarized below, and more details of such stages can be found in our previous work [21]. When the difference between W_t and W_{t-1} becomes insignificant, the process advances to the next stage. The basic idea behind having multiple stages is to gradually construct and update the parameters from subword-like tokens to word-like tokens. This prevents the parameters from being caught in local optima, which often happens when too many parameters are optimized at the same time. The general flow of the training procedure is as follows.

In the acoustic optimization stage, the HMM parameters θ_A for the subword-like tokens are trained alone, because these HMMs are the primary building blocks of the whole linguistic structure and a reliable estimate of their parameters is the key. In each iteration, the acoustic model set θ_A consists of the HMMs trained from the corpus based on W_{t-1} with the ML criterion as in Eq. (1). The lexicon θ_L is derived by collecting all word-like tokens appearing in W_{t-1} with counts exceeding a threshold. Free word decoding is then performed on the whole corpus based on θ_A and θ_L, producing an updated label W_t. When W_{t-1} is updated to W_t, not only are the HMM parameters of θ_A and the HMM segmentation boundaries updated, but the vocabulary size of θ_L may also shrink when the counts of some word-like tokens become small enough.

In the lexical optimization stage, we then break the word-like tokens into subword-like tokens and reconstruct better word-like tokens in the lexicon. In each iteration, we break the existing word-like tokens in W_{t-1} into subword-like tokens, and then reconstruct new word-like tokens based on their occurrence in W_{t-1}. Segments of several consecutive subword-like tokens appearing frequently enough and with high enough right and left context variation are taken as new word-like tokens. This can be realized by constructing an efficient data structure called a PAT-Tree from the labels W_{t-1} [44]. In this way, the lexicon can be updated significantly in each iteration. With the PAT-Tree, word-like tokens consisting of a single subword-like token are often included in the lexicon. If a word is not in the lexicon, it is usually represented by a sequence of subword-like tokens. The updated lexicon is then used in free-word decoding to produce the labels W_t. The whole process is completed when there is no significant difference between W_t and W_{t-1}. This gives the automatically discovered linguistic structure θ = {θ_A, θ_L}. The time alignments for the subword-like tokens are updated in all iterations when the labels are decoded.
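
The following sketch captures the candidate-generation idea of the lexical optimization stage with a simple Counter in place of the PAT-Tree of [44]: pairs of consecutive subword-like tokens that occur often and with sufficiently varied left and right contexts are promoted to word-like tokens. The thresholds and the restriction to pairs are illustrative assumptions, not the actual procedure.

```python
# Counter-based stand-in for the PAT-Tree candidate extraction of word-like tokens.
from collections import Counter, defaultdict


def propose_word_like_tokens(label_seqs, min_count=5, min_context_variety=3):
    """label_seqs: decoded subword-like token id sequences, one list per utterance."""
    counts = Counter()
    left, right = defaultdict(set), defaultdict(set)
    for seq in label_seqs:
        for i in range(len(seq) - 1):
            pair = (seq[i], seq[i + 1])          # candidate word-like token
            counts[pair] += 1
            if i > 0:
                left[pair].add(seq[i - 1])       # distinct left contexts
            if i + 2 < len(seq):
                right[pair].add(seq[i + 2])      # distinct right contexts
    return [p for p, c in counts.items()
            if c >= min_count
            and len(left[p]) >= min_context_variety
            and len(right[p]) >= min_context_variety]
```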

Note that the word-like and subword-like tokens here try to mimic the linguistically meaningful words and subword units, but for an arbitrary granularity (m, n) the results may not be linguistically meaningful at all (i.e., not close to phonemes, words, phrases, etc.). However, if the granularities are properly chosen such that the discovered subword-like tokens are close to the phonemes of the language, then the discovered word-like tokens in the lexicon can be quite close to the words of the language. The point of having word-like tokens and the lexicon is to provide higher-level context constraints which help produce subword-like tokens of higher quality.

III A unified view of the paradigms

Although the Hierarchical Paradigm and Multi-granular Paradigm may look different at first glance, in this section we propose a unified view on these two paradigms. The objective of this section is to offer a guideline for selecting the granularity parameters under the Multi-granular Paradigm so that the two Paradigms can be compared in terms of their complexity. The word-like tokens under the Hierarchical Paradigm can be considered as tokens of a coarser temporal granularity under the Multi-granular Paradigm. The tokens of a finer temporal granularity under the Multi-granular Paradigm can be considered as subword-like tokens used to construct the word-like tokens under the Hierarchical Paradigm. With this approach, we can compare one paradigm to the other easily and characterize the mathematical relation between the two.

Let the total number of possible token sequences on any utterance for a token set with model parameters θ be denoted as N(T, θ). For an utterance of T frames, we assume that each HMM state occupies the same number of frames d. Let us denote the model parameters of a Multi-granular token set with granularity (m, n) as θ_{mn}. Under the Multi-granular Paradigm, the length of every HMM is m·d frames, and the utterance is represented as a token sequence of T/(m·d) acoustic tokens. There are n possible acoustic tokens, so the number of possible token sequences on an utterance of length T is

$N(T, \theta_{mn}) = n^{\,T/(md)}$   (4)

Model parameters for a token set under the Hierarchical Paradigm consist of two parts: θ = {θ_A, θ_L}, where θ_A is the subword-like token acoustic model of granularity (m, n) and θ_L is the word-like token lexicon. Because a word-like token is composed of one to several subword-like tokens, u is a positive integer multiple of m. Since the lexicon limits the token sequence for any utterance to include only word-like tokens in the lexicon, the set of possible subword-like token sequences is a subset of those allowed by the subword-like acoustic token model θ_{mn} alone, hence we have

$N(T, \theta) \le N(T, \theta_{mn})$   (5)

On the other hand, with the Hierarchical Paradigm we represent the utterances as sequences of word-like tokens. We have a total of v allowed distinct word-like tokens, and the maximum length of a word-like token is u states. Since not all word-like tokens are composed of the same number of subword-like tokens in the lexicon, this set of word-like tokens has mixed temporal granularity. This means the word-like tokens have varying numbers of states, with the longest having u states. The lower bound of the total number of token sequence representations for this situation is the case when all the word-like tokens in the lexicon have the same length of u states. This number is the same as for a set of v subword-like tokens each with u states, or θ_{uv} for the Multi-granular Paradigm, therefore

$N(T, \theta) \ge N(T, \theta_{uv})$   (6)

By combining Eq. (5) and Eq. (6), we can get an upper and a lower bound:

$N(T, \theta_{uv}) \le N(T, \theta) \le N(T, \theta_{mn})$   (7)

Eq. (7) means that we can bound the number of possible token sequence representations for utterances based on a token set θ under the Hierarchical Paradigm using two token sets θ_{mn} and θ_{uv} under the Multi-granular Paradigm. This is helpful, because acoustic tokens under the Hierarchical Paradigm can be tricky to train. With Eq. (7), we can achieve a similar order of representations with the Multi-granular Paradigm by selecting the granularities (m', n') within the box on the granularity plane,

$m \le m' \le u, \qquad n \le n' \le v$   (8)

between the points (m, n) and (u, v). This also serves as a good guideline for selecting the granularity parameters under the Multi-granular Paradigm when some knowledge about the underlying language of the spoken corpus is available: the temporal granularities can be selected ranging between those corresponding to the duration of an average phone (m) and of the longest word (u), while the phonetic granularities can be selected ranging between the size of the phoneme inventory (n) and the size of the vocabulary (v).

Recall that in the lower bound in Eq. (7), (u, v) is actually the granularity parameter of a token set under the Multi-granular Paradigm, so we may rewrite N(T, θ_{uv}) using Eq. (4). By substituting (m, n) with (u, v) in Eq. (4) and using the relation in Eq. (7), we get

$v^{\,T/(ud)} \le n^{\,T/(md)}, \quad \text{i.e.,}\quad v \le n^{u/m}$   (9)

The equality in Eq. (9) becomes true when all words in the corresponding lexicon are composed of the same number of states and

$v = n^{u/m}$   (10)

When Eq. (10) holds, the number of possible representations for a token set under the Hierarchical Paradigm is actually the same as that of a token set θ_{mn} or θ_{uv} under the Multi-granular Paradigm (the lower bound equals the upper bound). In other words, the two-level Hierarchical Paradigm is reduced to the one-level Multi-granular Paradigm. The simplest situation for this to happen would be when every subword-like token is a word-like token in the lexicon (u = m, v = n). Another case is when all word-like tokens are composed of exactly k subword-like tokens, and every possible combination of k subword-like tokens corresponds to a word-like token in the lexicon (u = km, v = n^k). An example would be m = 3 and n = 100 with k = 2, giving u = 6 and v = 100^2 = 10000. This explains why the number of possible representations for utterances by token sequences can be used to develop a unified view for analyzing the two paradigms.
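
A small numerical sketch of Eq. (4) and the bound in Eqs. (9)-(10) is given below; the state duration d and the example values are illustrative only.

```python
# Eq. (4): number of possible token sequences for an utterance of T frames,
# assuming every HMM has m states and every state occupies d frames.
def num_token_sequences(T, m, n, d=3):
    return n ** (T / (m * d))


# Eq. (9)/(10): the lexicon size v of a Hierarchical token set is bounded by
# n ** (u / m); equality reduces the Hierarchical case to the Multi-granular one.
def max_lexicon_size(m, n, u):
    return n ** (u / m)


print(max_lexicon_size(m=3, n=100, u=6))          # 10000.0
print(num_token_sequences(T=90, m=3, n=100) >=    # coarse vs. fine granularity:
      num_token_sequences(T=90, m=6, n=10000))    # equal when v = n ** (u / m)
```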

Note that the same results in this section can be derived if we use the number of bits required to store the decoded acoustic tokens in the discussion. For example, we can derive Eq. (10) directly by equating the token storage requirements for token sequences in Table VII at two different granularities (m, n) and (u, v).

IV Spoken Term Detection Using Discovered Acoustic Tokens

There can be various applications for the acoustic tokens presented here, and Query-by-Example Spoken Term Detection (QbE-STD) is a good example. In this section we summarize the ways to perform QbE-STD [22] using the acoustic tokens. Note that due to the nature of unsupervised learning, tokens with high occurrence counts in the corpus have better representations, making the system perform relatively worse for queries with low occurrence counts.

IV-A Off-line Phase: Token Pairwise Distance

Let {t_1, t_2, ..., t_n} denote a set of the acoustic tokens discovered here. For the Multi-granular Paradigm, n is the phonetic granularity hyperparameter in the parameter set ψ = (m, n) for each set of acoustic tokens. For the Hierarchical Paradigm, n is the total number of subword-like tokens and each t_i is a subword-like token. We first construct a distance matrix S of size n × n off-line for these tokens, for which the element S(i, j) is the distance between any two token HMMs t_i and t_j in the set,

$S(i, j) = \mathrm{KL}(t_i \parallel t_j) + \mathrm{KL}(t_j \parallel t_i)$   (11)

The KL-divergence between two token HMMs in Eq. (11) can be defined as the symmetric KL-divergence between each pair of corresponding states in t_i and t_j, based on the variational approximation [45], summed over the states. It is also possible to perform a state-level Dynamic Time Warping (DTW) between the two state sequences of t_i and t_j (i.e., one state in HMM t_i can be matched to several states in HMM t_j and vice versa), and then sum over the optimal path. This matrix is constructed for each token set ψ.
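
A sketch of the state-wise symmetric KL computation behind Eq. (11) is shown below. It assumes a single diagonal-covariance Gaussian per state (the paper uses 4 Gaussians per state, handled with the variational approximation of [45]) and represents each token HMM simply as a list of per-state (mean, variance) pairs; these simplifications are assumptions for illustration.

```python
import numpy as np


def kl_diag_gauss(mu0, var0, mu1, var1):
    """KL( N(mu0, var0) || N(mu1, var1) ) for diagonal-covariance Gaussians."""
    return 0.5 * np.sum(np.log(var1 / var0) + (var0 + (mu0 - mu1) ** 2) / var1 - 1.0)


def token_distance(hmm_i, hmm_j):
    """Sketch of Eq. (11): symmetric KL summed over corresponding states.
    Each HMM is a list of (mean, variance) arrays, one pair per state."""
    return sum(kl_diag_gauss(mi, vi, mj, vj) + kl_diag_gauss(mj, vj, mi, vi)
               for (mi, vi), (mj, vj) in zip(hmm_i, hmm_j))
```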

Fig. 2: The transpose of the matching matrix

IV-B On-line Phase: Relevance Score Evaluation

In the on-line phase, we first perform the following for each spoken query Q and each spoken document D (an utterance) in the target corpus. This is done for each token set ψ in the Multi-granular Paradigm. Assume a document D is decoded into a sequence of acoustic tokens with indices (d_1, d_2, ..., d_{|D|}) and the query Q into a sequence of tokens with indices (q_1, q_2, ..., q_{|Q|}). We thus construct a matching matrix M of size |D| × |Q| for every document-query pair, in which each entry M(a, b) is the distance between the acoustic tokens with indices d_a and q_b as in Eq. (12), shown in Fig. 2(a) for a simple example, where S is defined in Eq. (11),

$M(a, b) = S(d_a, q_b)$   (12)

The above only considers the one-best token sequences W_D and W_Q decoded from D and Q. It is possible to consider the N-best token sequences by representing them as sequences of posteriorgram features and integrating them into the matrix M as shown in [22]. However, experiments showed that the extra improvement brought in this way is almost negligible in the Multi-granular Paradigm, probably because in that paradigm the different token sequences based on the different token sets can be considered a huge lattice including many one-best paths, which are jointly considered here [22].

For matching a sub-sequence of D with Q, we take the summation of the elements in the matrix M along the diagonal direction, generating the accumulated distance for the sub-sequences starting at every token position a in D, as shown in Fig. 2(a). The minimum over a is taken as the Initial Relevance Score (IRS) between document D and query Q on the token set ψ, as in Eq. (13).

$R_{IRS}(D, Q, \psi) = \min_{a} \sum_{b=1}^{|Q|} M(a+b-1,\, b)$   (13)

In this work we propose that the HMM probabilities for the spoken utterances D and Q, evaluated with respect to each token d_a and q_b in the token sequences, can also be taken into account to produce the Enhanced Relevance Score (ERS) as shown in Eq. (14).

$R_{ERS}(D, Q, \psi) = \min_{a} \sum_{b=1}^{|Q|} \big[\, M(a+b-1,\, b) - \log P(d_{a+b-1} \mid D) - \log P(q_b \mid Q) \,\big]$   (14)

Here P(d_a | D) and P(q_b | Q) are the probabilities of observing the tokens d_a and q_b in D and Q respectively. To turn the probabilities into scores, we take the logarithm of P(d_a | D) and P(q_b | Q). Although we are summing over b in Eq. (14), and the sum over log P(q_b | Q) will be the same for a given query Q, we still keep the term because it will be different across acoustic token sets ψ, which matters when we combine the distances.

It is also possible to consider token-level DTW on the matrix M as shown in Fig. 2(b). However, experiments have shown that the extra improvement brought in this way is almost negligible in the Multi-granular Paradigm. This is probably because in that paradigm the different token sequences based on the different token sets (e.g., including longer/shorter tokens) are jointly considered, so the different time-warped matchings and insertions/deletions between D and Q are already automatically included [22]. For the Multi-granular Paradigm, the multiple relevance scores R_IRS and R_ERS in Eq. (13) and Eq. (14) obtained with the multiple token sets can be normalized across documents and then averaged. The averaged scores are then used to rank all the documents for QbE-STD.
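
The snippet below is a compact sketch of the on-line scoring of Eqs. (12)-(14), under the assumptions that lower scores mean better matches and that the decoded document is at least as long as the decoded query; the function and argument names are hypothetical.

```python
import numpy as np


def relevance_score(doc_ids, qry_ids, S, log_p_doc=None, log_p_qry=None):
    """doc_ids / qry_ids: decoded token index sequences for D and Q;
    S: the off-line n x n token distance matrix of Eq. (11);
    log_p_doc / log_p_qry: per-token HMM log-probabilities for the ERS terms.
    Returns IRS (Eq. (13)) if the log-probabilities are omitted, ERS (Eq. (14)) otherwise."""
    D, Q = len(doc_ids), len(qry_ids)
    M = S[np.ix_(doc_ids, qry_ids)]              # matching matrix, Eq. (12)
    if log_p_doc is not None and log_p_qry is not None:
        M = M - np.asarray(log_p_doc)[:, None] - np.asarray(log_p_qry)[None, :]
    # accumulate along diagonals for every starting position in the document
    diag_sums = [M[a:a + Q, :].diagonal().sum() for a in range(D - Q + 1)]
    return min(diag_sums)
```

For the Multi-granular Paradigm, such a score would be computed once per token set ψ, normalized across documents, and then averaged before ranking, as described above.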

IV-C The Heuristics behind Token Sequence Representations

The task of QbE-STD tries to define some tractable relevance score between a spoken document D and a spoken query Q to approximate the oracle relevance score R*(D, Q), which is unknown. The role of the unsupervised token sets is to map the two spoken utterances D and Q to an intermediate representation where some tractable relevance score calculation can be performed. We map D and Q to the intermediate representations W_D and W_Q using the token set with parameters ψ. Conceptually, the oracle relevance score can be approximated by the Initial Relevance Score (IRS) evaluated directly from the token sequences,

$R^{*}(D, Q) \approx R_{IRS}(D, Q, \psi)$   (15)

where R_IRS is defined in Eq. (13) and is evaluated with W_D and W_Q. Because the token sequences W_D and W_Q are imperfect representations of D and Q, R_IRS can be far from R*. The HMM probabilities P(d_a | D) and P(q_b | Q) in Eq. (14) describe the quality of these representations. When these probabilities are high, the approximation in Eq. (15) is closer to R*. The two terms -log P(d_a | D) and -log P(q_b | Q) score the quality of the representation when we approximate the utterances D and Q with the token sequences W_D and W_Q. In Eq. (14) we simply add these quality scores to Eq. (13) to produce the Enhanced Relevance Score (ERS),

$R^{*}(D, Q) \approx R_{ERS}(D, Q, \psi)$   (16)

In the experiments below we will show that Eq. (14) worked better than Eq. (13) in all cases.

V Testing Scenarios for the Experiments

We wish to use QbE-STD as the example application to test the token sets discussed here. The task of QbE-STD involves two sets of utterances: the set of spoken queries and the set of spoken documents. If we ignore the structural differences, they are simply two sets of spoken utterances.

In our previous works [21, 22, 28], it was assumed that the spoken queries were not accessible during the off-line phase. That means we assumed only one spoken query was given during the on-line phase for every query search, so the retrieved results for every query were independent of each other. However, because the documents and the queries are simply two sets of utterances, in this work we also investigate another testing scenario where the situation is reversed. This means the entire set of spoken queries is available during the off-line phase, but the spoken documents are given one by one during the on-line phase. We further discuss these two scenarios below, both of which will be tested with the token sets proposed.

V-A Document Tokens

In this scenario the spoken query set is available only during testing, while the whole spoken document set is available during training of the token sets. Because the token sets are trained on the documents, tokens trained under this scenario are referred to as document tokens. This is the scenario for most cases of STD, including our earlier work [21, 22, 28], where the document set is both the archive containing the spoken documents we wish to retrieve from and the archive used to train the unsupervised tokens.

V-B Query Tokens

In this scenario the spoken document set is available only during testing, while the whole spoken query set is available during training of the token sets. Because the token sets are trained on the queries, tokens trained under this scenario are referred to as query tokens. This system has the benefit of being very fast to train, since the training time complexity is quadratic in the length of the training utterances (from Table VII and Appendix A), and queries are usually much shorter than documents. Because most queries are short, the quality of the system depends highly on the number of utterances in the query set. Note that when the number of training queries is small, this testing scenario becomes similar to DTW over raw features. Since the training set is small, each HMM will be fed with fewer training examples. In the most extreme case, each HMM will be assigned only one training sequence, whose frames it takes as the means of its states, with a small variance built around them. Since it has been shown that HMM decoding is in fact very similar to DTW [46], when calculating the Gaussian state emission probabilities of these HMMs we are computing a Gaussian kernel between the mean of each Gaussian and the frame-level feature. The QbE-STD process would then be similar to performing DTW of each query directly over each spoken document with a Gaussian kernel as the distance measure over feature pairs.
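
The degenerate case described above (one training sequence per HMM) behaves essentially like subsequence DTW of the query features over the document features with a Gaussian kernel between frame pairs; a sketch of that view is below, with an illustrative kernel bandwidth.

```python
import numpy as np


def gaussian_kernel_dtw(query, doc, sigma=1.0):
    """Subsequence DTW of a query feature sequence over a document feature
    sequence, with a Gaussian-kernel-based distance between frame pairs.
    query, doc: arrays of shape (frames, feat_dim); sigma is illustrative."""
    Q, D = len(query), len(doc)
    dist = np.array([[1.0 - np.exp(-np.sum((q - d) ** 2) / (2 * sigma ** 2))
                      for d in doc] for q in query])
    acc = np.full((Q + 1, D + 1), np.inf)
    acc[0, :] = 0.0                      # the match may start anywhere in the document
    for i in range(1, Q + 1):
        for j in range(1, D + 1):
            acc[i, j] = dist[i - 1, j - 1] + min(acc[i - 1, j - 1],
                                                 acc[i - 1, j],
                                                 acc[i, j - 1])
    return acc[Q, 1:].min()              # best sub-sequence match score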

A real world example for this scenario would be using personalized devices like mobile phones to search for spoken archives. The device has a record of spoken queries, possibly stored locally, but not the spoken documents. The user then decides to search for the queries of interest in different spoken archives that contain the documents in the cloud. Note that the user does not have to label the queries in advance.

Paradigm (m, n) (u, v) model-dependent minCnxe
IRS ERS ERS(T1) ERS(T2) ERS(T3)
Multi-granular (3, 100) - 0.8008 0.7838 0.7820 0.7967 0.8128
(3, 200) - 0.7985 0.7842 0.7860 0.7942 0.8001
(5, 100) - 0.7872 0.7651 0.7722 0.7886 0.7732
(5, 200) - 0.7929 0.7671 0.7757 0.7868 0.7904
(7, 100) - 0.7938 0.7672 0.7727 0.7908 0.7885
(7, 200) - 0.7833 0.7645 0.7735 0.7797 0.7769
average - 0.7826 0.7644 0.7701 0.7770 0.7788
Hierarchical (3, 100) (6, 3338) 0.7974 0.7820 0.7845 0.8003 0.7966
(3, 200) (6, 8571) 0.7978 0.7825 0.7840 0.7987 0.7953
(5, 100) (10, 3904) 0.7914 0.7663 0.7713 0.7861 0.7613
(5, 200) (10, 9191) 0.7978 0.7729 0.7780 0.7895 0.7729
(7, 100) (14, 4119) 0.7939 0.7700 0.7749 0.7852 0.7888
(7, 200) (14, 8296) 0.7859 0.7660 0.7736 0.7798 0.7920
average - 0.7840 0.7644 0.7706 0.7805 0.7915
TABLE I: STD performance of Multi-granular and Hierarchical Document Tokens trained on the Document Corpus of QUESST 2015. m denotes the number of states in each HMM, n denotes the number of distinct HMMs (tokens), u denotes the number of states of the longest word-like token in the lexicon, and v denotes the number of word-like tokens in the lexicon. The best result for each column is shown in bold.
Paradigm (m, n) (u, v) model-dependent minCnxe
IRS ERS ERS(T1) ERS(T2) ERS(T3)
Multi-granular (3, 100) - 0.7938 0.7856 0.7860 0.7987 0.8139
(3, 200) - 0.8047 0.7847 0.7794 0.8048 0.8145
(5, 100) - 0.7854 0.7804 0.7862 0.7984 0.8062
(5, 200) - 0.8010 0.7882 0.7813 0.7984 0.8117
(7, 100) - 0.7908 0.7814 0.7795 0.7990 0.8075
(7, 200) - 0.7966 0.7911 0.7807 0.8014 0.8073
average - 0.7828 0.7735 0.7747 0.7984 0.8114
Hierarchical (3, 100) (6, 582) 0.8025 0.7921 0.7819 0.8051 0.8139
(3, 200) (6, 878) 0.8022 0.7905 0.7819 0.7963 0.8051
(5, 100) (10, 379) 0.7893 0.7822 0.7817 0.8013 0.8131
(5, 200) (10, 539) 0.8036 0.7978 0.7786 0.7966 0.8103
(7, 100) (14, 320) 0.7880 0.7841 0.7822 0.8046 0.8090
(7, 200) (14, 395) 0.7950 0.7875 0.7775 0.8079 0.8065
average - 0.7923 0.7829 0.7774 0.7960 0.8026
TABLE II: STD performance of Multi-granular and Hierarchical Query Tokens trained on the Development Queries of QUESST 2015. m denotes the number of states in each HMM, n denotes the number of distinct HMMs (tokens), u denotes the number of states of the longest word-like token in the lexicon, and v denotes the number of word-like tokens in the lexicon. The best result for each column is shown in bold.
Index Methods actCnxe minCnxe
(1) Caranica et al. Romanian Phones MFCC [47] 1.0061 0.9944
(2) Caranica et al. Romanian Phones PNCC [47] 1.0061 0.9943
(3) Ma et al. SMO+iSAX[48] 0.9988 0.9872
(4) Ma et al. subseq+MFCC [48] 1.0658 0.9823
(5) Skácel et al. Posteriorgrams DTW[49] 0.8452 0.8263
(6) Skácel et al. Posteriorgrams subsequence DTW[49] 0.8447 0.8124
(7) Hou et al. Spectral, phoneme-state posterior, BNF, fusion of 66 systems [50] 0.773 0.757
(8) Proposed, Multi-granular Document Tokens Eq. (14) (7,200) 0.9997 0.9937
(9) Proposed, Multi-granular Query Tokens Eq. (14) Average 1.0022 0.9965
(10) Proposed, Hierarchical Query Tokens Eq. (14) Average 1.0020 0.9964
(11) Proposed, Multi-granular Document Tokens Eq. (14) Average 1.0015 0.9932
(12) Proposed, Hierarchical Document Tokens Eq. (14) Average 1.0013 0.9932
TABLE III: STD performance of systems submitted by Participants of QUESST 2015

VI Spoken Term Detection Experiments

We use the dataset provided by the “Query by Example Search on Speech Task” (QUESST), held as part of the MediaEval 2015 evaluation task [42], in our spoken term detection experiments. QUESST 2015 intended to evaluate language-independent audio search systems in a low resource scenario. The QUESST 2015 dataset is composed of a set of spoken documents, and 2 sets of spoken queries. The spoken document set is composed of around 18 hours of audio (11662 files) in the following 7 languages: Albanian, Czech, English, Mandarin, Portuguese, Romanian and Slovak, with different amounts of audio per language. The spoken queries, which are relatively short (5.8 seconds long on average), were automatically extracted from longer recordings and manually checked to avoid very short or very long utterances. The QUESST 2015 dataset includes 445 development queries and 447 evaluation queries, with the number of queries per language being more or less balanced with respect to the amount of audio available in the spoken document set. Both of the two query sets contain three types of queries: the first one (T1) involves “exact matches” whereas the second one (T2) allows for inflectional variations of words or word re-orderings (that is, “approximate matches”); the third one (T3) is similar to T2, but the queries were drawn from conversational speech, thus containing strong coarticulations and some filler content between words. The data was artificially noised and reverberated with equal amounts of clean, noisy, reverberated and noisy+reverberated speech. Reverberation was obtained by passing the audio through a filter with an artificially generated room impulse response (RIR). The normalized cross entropy cost (Cnxe) [51, 52], the lower the better, was used as the primary metric for the evaluation.

Below we only report the results on the development queries since the results were similar on the evaluation queries. We trained four groups of token sets under the two proposed paradigms and the two testing scenarios (Document Tokens and Query Tokens), with different granularities. We list the model-dependent minCnxe [42], obtained with either the Initial Relevance Score (IRS) in Eq. (13) or the Enhanced Relevance Score (ERS) in Eq. (14), on the different token sets for Document Tokens and Query Tokens in Table I and Table II respectively. We further show the detailed results for ERS on the query types T1, T2 and T3 respectively. Results for other metrics are also available, but they all showed consistent trends and are therefore left out. The model-dependent minCnxe was selected because it showed the most variance across the different token sets. In the top halves of Tables I and II, we trained several Multi-granular token sets on the utterances with the granularities (m, n) listed in the tables. In the bottom half of Table I, the Hierarchical tokens were trained with lexicon and language models using the method in [21]. In the bottom half of Table II, we only used the lexicon and not the language model in the Hierarchical Paradigm, because the noisy acoustic conditions did not allow the initialization step to find stable word-like structures and the queries are too short for an effective language model. The number of states of the longest word-like token in the lexicon u and the number of word-like tokens in the lexicon v are also shown. In training these token sets, we constrained the word-like tokens to be at most two subword-like tokens long, or u = 2m. The model-dependent minCnxe results were evaluated using IRS in Eq. (13) and ERS in Eq. (14) based on the subword-like tokens with granularity (m, n), except that the decoding process for generating the token sequence representations was based on the lexicon constraints for the word-like tokens.

In the last rows of the top and bottom halves of Tables I and II we averaged the relevance scores obtained at each granularity and evaluated the performance on the averaged relevance scores. From Tables I and II, several observations can be made: (a) In all cases, the results obtained with the Enhanced Relevance Score (ERS) in Eq. (14) were better than those obtained with the Initial Relevance Score (IRS) in Eq. (13). These results verify our analysis in Section IV-C regarding the heuristics behind token sequence representations. This is a major improvement over our previous work, which only considered the IRS in Eq. (13). (b) By comparing Tables II and I, the performance of some of the query tokens was comparable to the document tokens. The query tokens at granularity (5,100) in Table II even performed better than the document tokens at granularities (3,100) and (3,200) in Table I for ERS. Only 445 short queries were used to train the query tokens, while 11662 long spoken documents were used to train the document tokens. With only 445 short queries we trained 100 or 200 token HMMs, so each HMM is given only a few training examples. The comparable performance to the document tokens, which were trained on 11662 long spoken documents, suggests that under noisy conditions our analysis in Section V-B is probably correct: training HMM tokens on very small training sets essentially just assigns the query features to the means of the Gaussians, and decoding the HMMs on the documents is really just performing DTW on the query-document pairs. (c) Eq. (10) successfully explains the trends between m, n, u, and v. We constrained the word-like tokens in the lexicon to be at most two subword-like tokens long, so u = 2m. When we substitute u with 2m in Eq. (10), we have

$v = n^{2m/m} = n^{2}$   (17)

which is the condition under which the number of representations in terms of token sequences is saturated, and the two-level representation of the Hierarchical Paradigm is reduced to the one-level Multi-granular Paradigm. Although the actual values of v are far smaller than the theoretical value in Eq. (17) required for saturation, Eq. (17) still explains some trends. For the query tokens in Table II, by comparing granularities (3,100), (5,100), (7,100) and (3,200), (5,200), (7,200) with n fixed at 100 or 200, we see that the actual value of v decreases as m increases, which is consistent with Eq. (17). In other words, when the number of distinct subword-like tokens is fixed, longer subword-like tokens imply a smaller number of distinct word-like tokens (smaller lexicon size). This is only partially observed for the document tokens in Table I, where the growth of v slows down. By comparing the granularities (3,100) and (3,200), (5,100) and (5,200), (7,100) and (7,200) with m fixed at 3, 5 or 7, we see that the actual value of v increases as n increases, which is also consistent with Eq. (17). In other words, when the length of the subword-like tokens is fixed, more distinct subword-like tokens imply more distinct word-like tokens (larger lexicon size). (d) Under most conditions, the Multi-granular Paradigm performed better than the Hierarchical Paradigm for both query tokens and document tokens. We believe this is because careful tuning is required for training the lexicons of the Hierarchical Paradigm to be successful. (e) By comparing the performance of the averaged scores for the different query types in both Tables I and II, the scores follow T1 < T2 < T3, indicating that T1 is the easiest type of query and T3 is the hardest. (f) For both document tokens and query tokens, the Hierarchical tokens managed to obtain the best results on most individual query types T1, T2 and T3, but not the best when all three query types are considered jointly. This is probably because the Hierarchical tokens managed to capture some structure of the specific query types. Good thresholds for the scores can be derived for specific query types, but the range of scores may be too different across query types, degrading the performance when they are jointly considered.

For comparison with supervised methods, we also list the results of the systems submitted by participants of the QUESST 2015 evaluation on the evaluation set in Table III. Because most teams reported their results with the model-independent actual Cnxe (actCnxe) and minimum Cnxe (minCnxe), we also report our results with the model-independent actCnxe and minCnxe in Table III instead of the model-dependent minimum Cnxe of Tables I and II. We list the results of our best performing token set in row (8) of Table III, which is (7,200) of the Multi-granular document tokens in Table I. The results of the averaged relevance scores in Tables I and II are also listed in rows (9), (10), (11), (12) of Table III. Note that in the QUESST 2015 evaluation the participants were allowed to use acoustic models trained on other labeled datasets, since the task did not require systems to compete under the zero-resource scenario. Also note that in our experiments we always assume that either the documents or the queries are not available during the training of the token sets, so the comparison is not entirely fair. In rows (1) and (2) Caranica et al. [47] used supervised phoneme HMM tokens trained on a labeled Romanian corpus of 8.7 hours. The results of row (1) were obtained with MFCCs, and those of row (2) with PNCCs [53]. Their systems are similar to the proposed approach here because both use token HMMs. Although they used supervised knowledge in their HMMs, the performance of our unsupervised HMMs in rows (11) and (12) is actually better. In rows (3) and (4) Ma et al. [48] used various combinations of Czech, Hungarian, and Russian phonetic tokens and frame-based DTW systems. Their systems are similar to the proposed approaches here because both fuse various token sets. In row (3), they fused the results from various ways to calculate the distances based on the phone sequences for three different languages. In row (4), they combined the distances of the phonetic tokens with frame-based DTW. Their supervised results are comparable to ours. In rows (5) and (6) Skácel et al. performed frame-based DTW on posteriorgrams extracted from Czech, Portuguese, Russian and Spanish systems. In row (5), they stacked the posteriorgrams and performed DTW. In row (6), they split the queries into multiple segments, performed DTW on each segment, and then averaged the results. The rationale behind splitting the queries into segments in row (6) is the complications of the T2 and T3 queries, and the improvement from row (5) to row (6) suggests that this is justified. We did not develop any special approach to deal with the T2 and T3 queries, which may explain their better results. In row (7), Hou et al. fused 66 systems of spectral features, phoneme-state posterior features and bottleneck features from 3 teams. Their aggregated system was the best performing by a large margin and can be considered the topline of QUESST 2015.

Considering real applications, the major advantage of the unsupervised approaches proposed here over multilingual training (using supervised models trained on other languages) is robustness across languages. The performance of multilingual approaches has been observed in experiments to rely on how closely the linguistic structure of the given corpora is related to that of the languages of the supervised models. In addition, the given corpora may be multilingual with code-switching, which makes robustness across languages even more important. This is also important for modeling endangered languages like the 26 Formosan languages [54, 55]. For such endangered languages, supervised models from which we can borrow related linguistic structures may not exist.

VII Subword-like Tokens Evaluation

We use the corpus and evaluation defined in the Zero Resource Speech Challenge 2015 [43] to evaluate the quality of the subword-like tokens obtained in this work. We choose Track 1 of the Challenge to evaluate the quality of the subword structures on two languages, English and Xitsonga. The Track 1 evaluation was based on the ABX discriminability test [56], including across-speaker and within-speaker tests. The warping distance obtained by performing DTW over the sequences of the obtained frame-level features for predefined signal pairs was used as the distance metric for the ABX discriminability test. For the test here, we use posteriorgrams with dimensionality n (n is the number of distinct subword-like tokens as used above) extracted from the decoded token lattices as the features to be evaluated in this experiment. There is no separate query set, so only document tokens are considered. Because the durations of the predefined signal pairs for the test were short and designed to evaluate frame-level speech features, the subword-like tokens extracted from the paradigms can be as long as or longer than the entire duration of the signal. Since the scenario is quite different from the original design of the challenge, the metrics of the challenge are used to compare the different granularities and paradigms rather than taken as a quality measure. The results in error percentage (the lower the better) on English and Xitsonga are listed in Table IV and Table V respectively. Table VI also lists the performance of other systems as references.
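
For clarity, the sketch below shows only the core decision rule of the ABX discriminability test used in this evaluation: for each (A, B, X) triple where A and X belong to the same category, an error is counted when X is closer to B than to A under the DTW distance over the feature sequences. The real ABX scoring additionally averages over speaker and context conditions and handles ties, which this sketch omits.

```python
def abx_error_rate(dtw_dist, triples):
    """triples: iterable of (A, B, X) feature sequences where A and X share a
    category and B does not; dtw_dist: any DTW distance over two sequences.
    Returns the error percentage (lower is better)."""
    triples = list(triples)
    errors = sum(dtw_dist(a, x) > dtw_dist(b, x) for a, b, x in triples)
    return 100.0 * errors / len(triples)
```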

In Table IV and Table V, we trained acoustic tokens using the Multi-granular Paradigm and the Hierarchical Paradigm on the two spoken archives in English and Xitsonga respectively. The results on both languages have similar trends and some observations can be made. (a) The shorter the HMM, the better the performance, probably because shorter HMMs can better fit the short-time variation of the signals for the evaluation intervals defined by the task. (b) Most results under the Hierarchical Paradigm were in general better than those under the Multi-granular Paradigm, which is the opposite of observation (d) of Section VI. This is probably because unlike MediaEval QUESST 2015, the Zero Resource Challenge 2015 used clean speech and did not mix multiple languages. With less noise, the Hierarchical Paradigm can better capture longer word-like tokens, leading to better subword-like tokens. For MediaEval QUESST 2015, the noise and different language structures made it difficult to build word-like tokens from subword-like tokens in the Hierarchical Paradigm.

In Table VI, we compare the best system at granularity (3,50) for both the Multi-granular Paradigm and the Hierarchical Paradigm with the performance of other systems reported by the Challenge. The official baseline provided by the Challenge was the MFCC features without delta and double delta, and the official topline was supervised phone posteriorgrams. The system proposed by Thiollière et al. [57] had two components: a dynamic time warping (DTW) based spoken term discovery (STD) system and a Siamese DNN. The STD system clustered word-sized repeated fragments in the acoustic streams, while the DNN was trained to minimize the distance between time-aligned frames of tokens of the same cluster and maximize the distance between tokens of different clusters. The frame-level features were then extracted from the bottleneck layer of the trained DNN. Renshaw et al. [14] proposed a similar system using correspondence autoencoders (cAE). The cAE was an autoencoder trained on feature pairs, one feature as the input and the other as the reconstruction target at the output. The frame-level features were then extracted from the bottleneck layer of the trained cAE. Like the hybrid Siamese system above [57], a DTW based system was used to align the feature pairs for feature sequences within the same cluster. The clusters can either be ground truth word types or discovered clusters. The performance was better when the clusters were the ground truth word types, although in that case the cAE/hybrid Siamese system was not an unsupervised model. Badino et al. [58] proposed the generation of discrete features by forcing the bottleneck features of autoencoders (AE) to be binary. The binary bottleneck features with dimensionality b extracted from the AE could be interpreted as an integer between 0 and 2^b - 1. The discrete integer sequence was further refined with HMMs. Chen et al. [26] used a Dirichlet process Gaussian mixture model (DPGMM) to represent speech frames with Gaussian posteriorgrams. The model performed unsupervised clustering on untranscribed data, and each Gaussian component could be considered a cluster of sounds from various speakers. The model inferred its model complexity (i.e., the number of Gaussian components) directly from the data. Baljekar et al. [59] used Articulatory Features (AF) trained on labeled speech in a higher-resource language to infer phonological segments of varying granularity. Both the frame-level AFs and the token-like inferred phonological units were used in the evaluation. The results of our system are listed for reference.

Paradigm (m, n) (u, v) across within
Multi-granular (3, 50) - 25.52 16.73
(3, 100) - 27.64 17.86
(5, 50) - 26.36 17.14
(5, 100) - 27.98 17.78
(7, 50) - 27.47 18.25
(7, 100) - 29.32 18.82
Hierarchical (3, 50) (6, 698) 25.29 16.50
(3, 100) (6, 2036) 27.40 17.63
(5, 50) (10, 1001) 26.36 17.14
(5, 100) (10, 2336) 27.98 17.78
(7, 50) (14, 1176) 27.41 18.18
(7, 100) (14, 2248) 29.32 18.82
TABLE IV: ABX performance of Multi-granular Tokens and Hierarchical Tokens at different granularities trained on the English Corpus of the Zero Resource Speech Challenge 2015
Paradigm (m, n) (u, v) across within
Multi-granular (3, 50) - 23.92 14.75
(3, 100) - 25.77 15.34
(5, 50) - 25.16 16.20
(5, 100) - 27.38 16.65
(7, 50) - 26.52 17.45
(7, 100) - 27.89 17.93
Hierarchical (3, 50) (6, 549) 23.98 14.67
(3, 100) (6, 1185) 25.88 15.24
(5, 50) (10, 701) 25.14 16.31
(5, 100) (10, 1213) 27.11 16.81
(7, 50) (14, 705) 26.51 17.47
(7, 100) (14, 1012) 27.82 17.87
TABLE V: ABX performance of Multi-granular Tokens and Hierarchical Tokens at different granularities trained on the Xitsonga Corpus of the Zero Resource Speech Challenge 2015
Method English Xitsonga
across within across within
Topline 16.0 12.1 4.5 3.5
Baseline 28.1 15.6 33.8 19.1
Thiollière et al. [57] 17.9 12.0 16.6 11.7
Renshaw et al. [14] 21.1 13.5 19.3 11.9
Badino et al. [58] 26.3 17.3 23.6 14.1
Chen et al. [26] 16.3 10.8 17.2 9.6
Baljekar et al. [59] 29.8 18.4 29.7 18.1
Proposed (3,50) Mult. 25.5 16.7 24.0 14.8
Proposed (3,50) Hier. 25.3 16.5 24.0 14.7
TABLE VI: ABX performance of systems submitted by participants of the Zero Resource Speech Challenge 2015

VIII Choosing between the two Paradigms

The goal of the experiments above is to provide a side-by-side comparison of the two paradigms on the same tasks, but the true strength of having two paradigms lies in choosing which to use for a given task. For a given task, usually one paradigm is preferred over the other, and they would seldom be used together. The experimental results show that the Hierarchical Paradigm can achieve the best performance at the correct granularities, since linguistic structures provide context for the acoustic tokens. However, finding the correct granularities usually involves a grid search over the hyperparameter space, which can be done more easily by training acoustic tokens under the Multi-granular Paradigm. With the acoustic subword-like tokens alone, the Multi-granular Paradigm can achieve decent performance by simply aggregating the scores of multiple token sets.

If a task has constraints on computation power so that decoding with a large lexicon under the Hierarchical Paradigm becomes difficult, or if the task has to be robust across various acoustic conditions, we can simply take the average scores from the multiple token sets of the Multi-granular Paradigm and ignore the hierarchical structures. For example, we have shown that supervised speaker-independent DNNs adapted with unsupervised speaker-dependent Multi-granular tokens as auxiliary targets can be used for speaker adaptation [7]. By training Multi-granular tokens directly on the audio of a specific user, the system can capture speaker-specific acoustic tokens that may be due to dialect. The high degree of similarity between the HMMs of the multiple sets of unsupervised tokens under the Multi-granular Paradigm and the supervised speaker-independent phoneme models makes it possible for them to learn from each other through the shared layers of the DNN. The proposed semi-supervised approach has beaten strong adaptation baselines.

If a task has constraints on storage space making it difficult to store multiple representations using the Multi-granular Paradigm, or if the task requires only one high quality representation for every audio file, we can select a few promising granularities to train the Hierarchical Paradigm and discard the rest. For example, we have shown that the quality of the hierarchical tokens can be good enough for query expansion in semantic retrieval of spoken content [24, 29]. A text-based query expansion retrieval system returns documents containing exact matches in the first pass of the system for a given query. Words that appear frequently in the retrieved documents are considered to be semantically related, and treated as expanded queries. In the second pass, the system also returns documents containing expanded queries. Using the Hierarchical Paradigm, this technique can be applied to spoken documents as well by treating the acoustic tokens as regular words. Many Out-of-Vocabulary (OOV) words incorrectly recognized by ASR systems can be consistently represented by acoustic tokens. For an unannotated spoken corpus, the user can say “President”, and the system would return spoken documents containing “Roosevelt” without any knowledge of the content.
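
A tiny sketch of the query-expansion idea just described is given below: acoustic tokens (treated as regular words) that occur frequently in the first-pass retrieved spoken documents are taken as expanded queries for a second pass. The function name, its arguments, and the top_k threshold are all hypothetical.

```python
from collections import Counter


def expand_query(first_pass_doc_ids, token_transcripts, top_k=5):
    """first_pass_doc_ids: documents retrieved by exact token matches;
    token_transcripts: mapping from document id to its acoustic token sequence.
    Returns the most frequent tokens in the retrieved documents as expanded queries."""
    counts = Counter()
    for doc_id in first_pass_doc_ids:
        counts.update(token_transcripts[doc_id])
    return [token for token, _ in counts.most_common(top_k)]
```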

Table VII is a summary of the computation complexity for the Multi-granular Paradigm and the Hierarchical Paradigm, based on the notations explained in Table VIII. Only the decoding step in Eq. (2) is different for the two Paradigms. The explanations of the content of Table VII are in Appendix A. These tables can be used as a reference for estimating the resources required and for deciding which of the two paradigms to use.

IX Conclusion

This paper presents two different paradigms for unsupervised discovery of structured acoustic tokens from a given spoken corpus, in which the acoustic tokens discovered are structured in two different ways. In the Multi-granular Paradigm, we are able to discover many sets of acoustic tokens over a two-dimensional space of temporal granularity and phonetic granularity, and these sets of tokens can be complementary to each other. In the Hierarchical Paradigm, the two-level word-like and subword-like tokens are learned layer after layer with the proposed cascaded stages of iterative optimization. We then unify the two paradigms in a single theoretical framework, and discuss when it would be better to choose one over the other. We performed Spoken Term Detection experiments on the MediaEval QUESST 2015 corpus and ABX evaluation on the Zero Resource Challenge 2015 corpus to verify the competitiveness of the discovered acoustic tokens.

Operation                        Equation/Notation                                     Time    Storage
Token Training                   Eq. (1)
Token Acoustic Model
Token Lexical Model
Multi-granular Token Decoding    Eq. (2)
Hierarchical Token Decoding      Eq. (2)
Token Sequence
Token Distance                   Eq. (11)
Token DTW                        DTW for token sequences
Feature Sequence
Feature DTW                      DTW for feature sequences
Storage Compression
Time Compression                 DTW for token sequences / DTW for feature sequences
TABLE VII: Summary of computation complexity for Multi-granular tokens with given granularity parameters and Hierarchical tokens with given lexical parameters, using the notations in Table VIII.
Notation    Definition
            the decoded token sequences at a given iteration
            the frame-level acoustic feature sequence
            general parameters of the model
            the HMM parameters of the set of HMMs, each with a given number of states
            the lexicon containing sequences of HMMs, with the longest being a given number of states long
            the parameter set
            the number of states
            the number of HMMs
            the number of states in the longest word in the lexicon
            the number of words in the lexicon
            the number of Gaussians in each state
            the rank of the feature
            the average duration of each state
            the average duration of each utterance
            the average duration of each utterance in the document corpus
            the average duration of each utterance in the query corpus
            the number of utterances
            the number of utterances in the document corpus
            the number of utterances in the query corpus
TABLE VIII: Summary of notations used in Table VII.

References

  • [1] Mark JF Gales, Kate M Knill, Anton Ragni, and Shakti P Rath, “Speech recognition and keyword spotting for low-resource languages: Babel project research at CUED,” in SLTU, 2014, pp. 16–23.
  • [2] Chun-An Chan and Lin-Shan Lee, “Unsupervised hidden Markov modeling of spoken queries for spoken term detection without speech recognition,” in INTERSPEECH, 2011, pp. 2141–2144.
  • [3] Marijn Huijbregts, Mitchell McLaren, and David van Leeuwen, “Unsupervised acoustic sub-word unit detection for query-by-example spoken term detection,” in Acoustics, Speech And Signal Processing (ICASSP), 2011 IEEE International Conference On. IEEE, 2011, pp. 4436–4439.
  • [4] Man-hung Siu, Herbert Gish, Arthur Chan, William Belfield, and Steve Lowe, “Unsupervised training of an HMM-based self-organizing unit recognizer with applications to topic classification and keyword discovery,” Computer Speech & Language, vol. 28, no. 1, pp. 210–223, 2014.
  • [5] Chia-ying Lee and James Glass, “A nonparametric Bayesian approach to acoustic model discovery,” in Proceedings Of The 50th Annual Meeting Of The Association For Computational Linguistics: Long Papers-Volume 1. Association for Computational Linguistics, 2012, pp. 40–49.
  • [6] Scott Novotney, Richard Schwartz, and Jeff Ma, “Unsupervised acoustic and language model training with small amounts of labelled data,” in Acoustics, Speech and Signal Processing, 2009. ICASSP 2009. IEEE International Conference on. IEEE, 2009, pp. 4297–4300.
  • [7] Cheng-Kuan Wei, Cheng-Tao Chung, Hung-Yi Lee, and Lin-Shan Lee, “Personalized acoustic modeling by weakly supervised multi-task deep learning using acoustic tokens discovered from unlabeled data,” submitted for a future conference.
  • [8] Herman Kamper, Aren Jansen, and Sharon Goldwater, “A segmental framework for fully-unsupervised large-vocabulary speech recognition,” Computer Speech & Language, 2017.
  • [9] Igor Szöke, Miroslav Skácel, Lukáš Burget, and Jan Černockỳ, “Coping with channel mismatch in query-by-example - BUT QUESST 2014,” in Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on. IEEE, 2015, pp. 5838–5842.
  • [10] Cheung-Chi Leung, Lei Wang, Haihua Xu, Jingyong Hou, Van Tung Pham, Hang Lv, Lei Xie, Xiong Xiao, Chongjia Ni, Bin Ma, et al., “Toward high-performance language-independent query-by-example spoken term detection for MediaEval 2015: Post-evaluation analysis,” in Proc. INTERSPEECH, 2016.
  • [11] Hongjie Chen, Cheung-Chi Leung, Lei Xie, Bin Ma, and Haizhou Li, “Unsupervised bottleneck features for low-resource query-by-example spoken term detection,” in Proc. INTERSPEECH, 2016.
  • [12] Peng Yang, Cheung-Chi Leung, Lei Xie, Bin Ma, and Haizhou Li, “Intrinsic spectral analysis based on temporal context features for query-by-example spoken term detection.,” in INTERSPEECH, 2014, pp. 1722–1726.
  • [13] Haipeng Wang, Tan Lee, Cheung-Chi Leung, Bin Ma, and Haizhou Li, “Acoustic segment modeling with spectral clustering methods,” IEEE/ACM Transactions on Audio, Speech and Language Processing (TASLP), vol. 23, no. 2, pp. 264–277, 2015.
  • [14] Daniel Renshaw, Herman Kamper, Aren Jansen, and Sharon Goldwater, “A Comparison of Neural Network Methods for Unsupervised Representation Learning on the Zero Resource Speech Challenge,” in Proceedings of Interspeech, 2015.
  • [15] Yaodong Zhang, Unsupervised Speech Processing with Applications to query-by-example Spoken Term Detection, Ph.D. thesis, Massachusetts Institute of Technology, 2013.
  • [16] Zhen Huang, Jinyu Li, Sabato Marco Siniscalchi, I-Fan Chen, Ji Wu, and Chin-Hui Lee, “Rapid adaptation for deep neural networks through multi-task learning.,” in Interspeech, 2015, pp. 3625–3629.
  • [17] Alex S Park and James R Glass, “Unsupervised pattern discovery in speech,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 16, no. 1, pp. 186–197, 2008.
  • [18] Aren Jansen and Benjamin Van Durme, “Indexing raw acoustic features for scalable zero resource search.,” in INTERSPEECH, 2012.
  • [19] Aren Jansen and Kenneth Church, “Towards unsupervised training of speaker independent acoustic models.,” in INTERSPEECH, 2011, pp. 1693–1692.
  • [20] Herbert Gish, Man-hung Siu, Arthur Chan, and William Belfield, “Unsupervised training of an HMM-based speech recognizer for topic classification,” in INTERSPEECH, 2009, pp. 1935–1938.
  • [21] Cheng-Tao Chung, Chun-an Chan, and Lin-shan Lee, “Unsupervised discovery of linguistic structure including two-level acoustic patterns using three cascaded stages of iterative optimization,” in Acoustics, Speech And Signal Processing (ICASSP), 2013 IEEE International Conference On. IEEE, 2013, pp. 8081–8085.
  • [22] Cheng-Tao Chung, Chun-an Chan, and Lin-shan Lee, “Unsupervised spoken term detection with spoken queries by multi-level acoustic patterns with varying model granularity,” in Acoustics, Speech And Signal Processing (ICASSP), 2014 IEEE International Conference On. IEEE, 2014.
  • [23] Cheng-Tao Chung, Wei-Ning Hsu, Cheng-Yi Lee, and Lin-Shan Lee, “Enhancing automatically discovered multi-level acoustic patterns considering context consistency with applications in spoken term detection,” in Acoustics, Speech And Signal Processing (ICASSP), 2015 IEEE International Conference On. IEEE, 2015.
  • [24] Yun-Chiao Li, Hung-yi Lee, Cheng-Tao Chung, Chun-an Chan, and Lin-shan Lee, “Towards unsupervised semantic retrieval of spoken content with query expansion based on automatically discovered acoustic patterns,” in Automatic Speech Recognition and Understanding (ASRU), 2013 IEEE Workshop on. IEEE, 2013, pp. 198–203.
  • [25] Yaodong Zhang and James R Glass, “Towards multi-speaker unsupervised speech pattern discovery,” in 2010 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 2010, pp. 4366–4369.
  • [26] Hongjie Chen, Cheung-Chi Leung, Lei Xie, Bin Ma, and Haizhou Li, “Parallel Inference of Dirichlet Process Gaussian Mixture Models for Unsupervised Acoustic Modeling: A feasibility study,” in Proceedings of Interspeech, 2015.
  • [27] Herman Kamper, Micha Elsner, Aren Jansen, and Sharon Goldwater, “Unsupervised neural network based feature extraction using weak top-down constraints.”
  • [28] Cheng-Tao Chung, Cheng-Yu Tsai, Hsiang-Hung Lu, Chia-Hsiang Liu, Hung-yi Lee, and Lin-shan Lee, “An iterative deep learning framework for unsupervised discovery of speech features and linguistic units with applications on spoken term detection,” in Automatic Speech Recognition and Understanding (ASRU), 2015 IEEE Workshop on. IEEE, 2015, pp. 245–251.
  • [29] Hung-yi Lee, Yun-Chiao Li, Cheng-Tao Chung, and Lin-shan Lee, “Enhancing query expansion for semantic retrieval of spoken content with automatically discovered acoustic patterns,” in Acoustics, Speech And Signal Processing (ICASSP), 2013 IEEE International Conference On. IEEE, 2013, pp. 8297–8301.
  • [30] David RH Miller, Michael Kleber, Chia-Lin Kao, Owen Kimball, Thomas Colthurst, Stephen A Lowe, Richard M Schwartz, and Herbert Gish, “Rapid and accurate spoken term detection.,” in INTERSPEECH, 2007, pp. 314–317.
  • [31] Jonathan Mamou, Bhuvana Ramabhadran, and Olivier Siohan, “Vocabulary independent spoken term detection,” in Proceedings Of The 30th Annual International ACM SIGIR Conference On Research And Development In Information Retrieval. ACM, 2007, pp. 615–622.
  • [32] Roy G Wallace, Robert J Vogt, and Sridha Sridharan, “A phonetic search approach to the 2006 NIST spoken term detection evaluation,” 2007.
  • [33] Yi-Cheng Pan and Lin-shan Lee, “Performance analysis for lattice-based speech indexing approaches using words and subword units,” Audio, Speech, and Language Processing, IEEE Transactions on, vol. 18, no. 6, pp. 1562–1574, 2010.
  • [34] Murat Saraclar and Richard Sproat, “Lattice-based search for spoken utterance retrieval,” Urbana, vol. 51, pp. 61801, 2004.
  • [35] Lou Boves, Rolf Carlson, Erhard W Hinrichs, David House, Steven Krauwer, Lothar Lemnitzer, Martti Vainio, and Peter Wittenburg, “Resources for speech research: Present and future infrastructure needs.,” in INTERSPEECH. Citeseer, 2009, pp. 1803–1806.
  • [36] Arun Kumar, Nitendra Rajput, Dipanjan Chakraborty, Sheetal K Agarwal, and Amit A Nanavati, “Wwtw: The world wide telecom web,” in Proceedings Of The 2007 Workshop On Networked Systems For Developing Regions. ACM, 2007, p. 7.
  • [37] Michael A Carlin, Samuel Thomas, Aren Jansen, and Hynek Hermansky, “Rapid evaluation of speech representations for spoken term discovery.,” in INTERSPEECH, 2011, pp. 821–824.
  • [38] Yaodong Zhang and James R Glass, “Unsupervised spoken keyword spotting via segmental DTW on Gaussian posteriorgrams,” in Automatic Speech Recognition & Understanding, 2009. ASRU 2009. IEEE Workshop On. IEEE, 2009, pp. 398–403.
  • [39] Haipeng Wang, Cheung-Chi Leung, Tan Lee, Bin Ma, and Haizhou Li, “An acoustic segment modeling approach to query-by-example spoken term detection,” in Acoustics, Speech And Signal Processing (ICASSP), 2012 IEEE International Conference On. IEEE, 2012, pp. 5157–5160.
  • [40] Yaodong Zhang and James R Glass, “A piecewise aggregate approximation lower-bound estimate for posteriorgram-based dynamic time warping.,” in INTERSPEECH, 2011, pp. 1909–1912.
  • [41] Yaodong Zhang, Kiarash Adl, and James Glass, “Fast spoken query detection using lower-bound dynamic time warping on graphical processing units,” in Acoustics, Speech And Signal Processing (ICASSP), 2012 IEEE International Conference On. IEEE, 2012, pp. 5173–5176.
  • [42] Igor Szöke, Luis Javier Rodríguez-Fuentes, Andi Buzo, Xavier Anguera, Florian Metze, Jorge Proenca, Martin Lojka, and Xiao Xiong, “Query by example search on speech at MediaEval 2015,” in MediaEval, 2015.
  • [43] Maarten Versteegh, Roland Thiolliere, Thomas Schatz, Xuan Nga Cao, Xavier Anguera, Aren Jansen, and Emmanuel Dupoux, “The zero resource speech challenge 2015,” in Proc. of INTERSPEECH, 2015.
  • [44] Thian-Huat Ong and Hsinchun Chen, “Updateable PAT-Tree approach to Chinese key phrase extraction using mutual information: A linguistic foundation for knowledge management,” 1999.
  • [45] John R Hershey and Peder A Olsen, “Approximating the Kullback-Leibler divergence between Gaussian mixture models,” in Acoustics, Speech And Signal Processing, 2007. ICASSP 2007. IEEE International Conference On. IEEE, 2007, vol. 4, pp. IV–317.
  • [46] B-H Juang, “On the hidden Markov model and dynamic time warping for speech recognition—a unified view,” Bell Labs Technical Journal, vol. 63, no. 7, pp. 1213–1243, 1984.
  • [47] Alexandru Caranica, Andi Buzo, Horia Cucu, and Corneliu Burileanu, “SpeeD@MediaEval 2015: Multilingual phone recognition approach to query by example STD,” in MediaEval, 2015.
  • [48] Min Ma and Andrew Rosenberg, “CUNY systems for the query-by-example search on speech task at MediaEval 2015,” in MediaEval, 2015.
  • [49] Miroslav Skácel and Igor Szöke, “BUT QUESST 2015 system description,” in MediaEval, 2015.
  • [50] Jingyong Hou, Van Tung Pham, Cheung-Chi Leung, Lei Wang, Haihua Xu, Hang Lv, Lei Xie, Zhonghua Fu, Chongjia Ni, Xiong Xiao, et al., “The NNI query-by-example system for MediaEval 2015,” in MediaEval, 2015.
  • [51] Luis J Rodriguez-Fuentes and Mikel Penagarikano, “MediaEval 2013 spoken web search task: system performance measures,” 2013.
  • [52] Xavier Anguera, Luis-J Rodriguez-Fuentes, Andi Buzo, Florian Metze, Igor Szöke, and Mikel Penagarikano, “QUESST2014: evaluating query-by-example speech search in a zero-resource setting with real-life queries,” in Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on. IEEE, 2015, pp. 5833–5837.
  • [53] Finnian Kelly and Naomi Harte, “A comparison of auditory features for robust speech recognition,” in Signal Processing Conference, 2010 18th European. IEEE, 2010, pp. 1968–1972.
  • [54] James Fox et al., “Current developments in comparative Austronesian studies,” 2004.
  • [55] Elizabeth Zeitoun and Ching-Hua Yu, “The formosan language archive: Linguistic analysis and language processing,” Computational Linguistics and Chinese Language Processing, vol. 10, no. 2, pp. 167–200, 2005.
  • [56] Thomas Schatz, Vijayaditya Peddinti, Francis Bach, Aren Jansen, Hynek Hermansky, and Emmanuel Dupoux, “Evaluating speech features with the minimal-pair ABX task: Analysis of the classical MFC/PLP pipeline,” in INTERSPEECH 2013: 14th Annual Conference Of The International Speech Communication Association, 2013, pp. 1–5.
  • [57] Roland Thiolliere, Ewan Dunbar, Gabriel Synnaeve, Maarten Versteegh, and Emmanuel Dupoux, “A Hybrid Dynamic Time Warping-deep Neural Network Architecture for Unsupervised Acoustic Modeling,” in Sixteenth Annual Conference of the International Speech Communication Association. Citeseer, 2015.
  • [58] Leonardo Badino, Alessio Mereta, and Lorenzo Rosasco, “Discovering Discrete Subword units with Binarized Autoencoders and Hidden-markov-model Encoders,” in Proceedings of Interspeech, 2015.
  • [59] Pallavi Baljekar, Sunayana Sitaram, Prasanna Kumar Muthukumar, and A Black, “Using Articulatory Features and Inferred Phonological Segments in Zero Resource Speech Processing,” in Proceedings of Interspeech, 2015.

Appendix A Time Complexity and Storage Requirement

We list the time complexity and storage requirement for training the acoustic tokens and performing QbE-STD, for a token set of a given granularity under the Multi-granular Paradigm and a lexicon with given parameters under the Hierarchical Paradigm. We ignore the space complexity during the computation of the algorithms and instead focus on the disk space required to store their results. For simplicity we assume that every utterance in the spoken archive has the same length and that the archive contains a fixed number of utterances. The results are listed in Table VII with the corresponding notations in Table VIII. Note that there is no distinction between the Multi-granular and Hierarchical Paradigms for most operations, except for token decoding in Eq. (2).
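As an illustration of where the DTW costs in the “Token DTW” and “Feature DTW” rows of Table VII come from, the following is a minimal sketch of DTW matching between two sequences of real-valued vectors; the Euclidean frame distance is an assumption made for illustration only. The nested loops make the quadratic time and storage in the two sequence lengths explicit, which is why decoding utterances into much shorter token sequences compresses both time and storage relative to frame-level matching.

import numpy as np

def dtw_distance(a, b):
    # a: (n, d) array, b: (m, d) array; runs in O(n * m) time with O(n * m) storage.
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]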
