Nonparametric Bayesian Double Articulation Analyzer for Direct Language Acquisition from Continuous Speech Signals


Tadahiro Taniguchi, Shogo Nagasaka, Ryo Nakashima

This research was partially supported by a Grant-in-Aid for Young Scientists (B) 2012-2014 (24700233) funded by the Ministry of Education, Culture, Sports, Science, and Technology, Japan. T. Taniguchi is with the College of Information Science and Engineering, Ritsumeikan University, 1-1-1 Noji Higashi, Kusatsu, Shiga 525-8577, Japan (taniguchi@em.ci.ritsumei.ac.jp). S. Nagasaka and R. Nakashima are with the Graduate School of Information Science and Engineering, Ritsumeikan University, 1-1-1 Noji Higashi, Kusatsu, Shiga 525-8577, Japan ({s.nagasaka, nakashima}@em.ci.ritsumei.ac.jp).
Abstract

Human infants can discover words directly from unsegmented speech signals without any explicitly labeled data. The main problem addressed in this paper is to develop a computational model that can estimate language and acoustic models and discover words directly from continuous human speech signals in an unsupervised manner. For this purpose, we propose an integrative generative model that combines a language model and an acoustic model into a single generative model called the “hierarchical Dirichlet process hidden language model” (HDP-HLM). The HDP-HLM is obtained by extending the hierarchical Dirichlet process hidden semi-Markov model (HDP-HSMM) proposed by Johnson et al. An inference procedure for the HDP-HLM is derived using the blocked Gibbs sampler originally proposed for the HDP-HSMM. This procedure enables the simultaneous and direct inference of language and acoustic models from continuous speech signals. Based on the HDP-HLM and its inference procedure, we develop a novel machine learning method called the nonparametric Bayesian double articulation analyzer (NPB-DAA) that can directly acquire language and acoustic models from observed continuous speech signals. By assuming the HDP-HLM as a generative model of observed time series data and by inferring the latent variables of the model, the method can analyze the latent double articulation structure, i.e., hierarchically organized latent words and phonemes, of the data in an unsupervised manner. We also carried out two evaluation experiments using synthetic data and actual human continuous speech signals representing Japanese vowel sequences. In the word acquisition and phoneme categorization tasks, the NPB-DAA outperformed a conventional double articulation analyzer (DAA) and a baseline automatic speech recognition system whose acoustic model was trained in a supervised manner.
The main contributions of this paper are as follows: (1) We develop a probabilistic generative model that integrates language and acoustic models, i.e., HDP-HLM. (2) We derive an inference method for this, and propose the NPB-DAA. (3) We show that the NPB-DAA can discover words directly from continuous human speech signals in an unsupervised manner.

Index Terms: Language acquisition, child development, Bayesian nonparametrics, latent variable model

I Introduction

INFANTS must solve the word segmentation problem in order to acquire language from the continuous speech signals to which they are exposed. The word segmentation problem is that of identifying word boundaries in continuous speech. If speech signals were given to infants as isolated words, the task would be easy for them. However, it is known that only a relatively small proportion of infant-directed utterances consist of an isolated word [1]. If infants had innate knowledge of words and phonemes, the problem could also be solved relatively easily. On the contrary, the fact that each language has a different inventory of phonemes and words clearly shows that infants have to acquire them through developmental processes.

From the viewpoint of statistical learning, this learning problem, i.e., direct language acquisition from continuous speech signals, is very difficult because infants do not have access to ground-truth labels of speech recognition results. In other words, the language acquisition process must be completely unsupervised. The main problem addressed in this paper is to develop a computational model that can estimate language and acoustic models and discover words directly from continuous human speech signals.

Most modern automatic speech recognition (ASR) systems have a language model that represents knowledge about words and their distributional probabilities as well as an acoustic model that represents knowledge about phonemes and their acoustic features, e.g., [Kawahara2000, Dahl2012]. Both are usually trained using large transcribed speech datasets and linguistic corpora through supervised learning. However, infants do not have access to such explicitly labeled datasets. They have to acquire both language and acoustic models from raw acoustic speech signals in an unsupervised manner.

This raises the question of what kinds of cues human infants use to discover words in continuous speech signals. Saffran et al. listed three types of cues for word segmentation: 1) prosodic, 2) distributional, and 3) co-occurrence [2]. 1) Prosodic cues rely on acoustic information, such as post-utterance pauses, stressed syllables, and acoustically distinctive final syllables. 2) Distributional cues represent the statistical relationships between pairs of neighboring speech sounds. 3) Co-occurrence cues are used by children to learn words by detecting sounds that co-occur with certain entities in the environment. Although many researchers had considered distributional cues too complex for infants to use, Saffran reported that 8-month-old infants can accomplish word segmentation from fluent speech based solely on distributional cues [3]. It has also been reported that distributional cues are used by infants by the age of 7 months, earlier than most other cues [4]. These results imply that infants have a fundamental mechanism that can estimate word segments using distributional cues. The prosodic and co-occurrence cues are believed to support this fundamental segmentation mechanism only as supplemental cues [2]. From the viewpoint of phonemic category acquisition, distributional patterns of sounds have also been considered to provide infants with clues about the phonemic structure of a language [5].
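As a toy illustration of how distributional cues alone can expose word boundaries, the sketch below computes forward transitional probabilities over an unsegmented syllable stream. The pseudo-words and syllables here are invented for illustration (they are not Saffran et al.'s actual stimuli), but the principle is the same: transitional probability dips at word boundaries.

```python
import random
from collections import Counter

def transition_probs(syllables):
    """Forward transitional probability TP(a -> b) = count(ab) / count(a)."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(a, b): c / first_counts[a] for (a, b), c in pair_counts.items()}

# Toy unsegmented "speech stream": three invented tri-syllabic words
# concatenated in random order without pauses.
rng = random.Random(0)
words = [["tu", "pi", "ro"], ["go", "la", "bu"], ["da", "ko", "me"]]
stream = [syl for _ in range(300) for syl in rng.choice(words)]
tp = transition_probs(stream)

# Within-word transitions are deterministic (TP = 1.0), while TP dips
# at word boundaries (roughly 1/3 here), marking candidate boundaries.
```

A learner that posits a boundary wherever the transitional probability drops would recover the three toy words from this stream.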

Based on these findings, in this paper we focus on distributional cues. We explore the fundamental computational mechanism that can discover words from speech signals using only distributional cues, and we develop an unsupervised machine learning method that can discover phonemes and words directly from unsegmented speech signals.

In this paper, we propose an unsupervised learning method called the nonparametric Bayesian double articulation analyzer (NPB-DAA) that can automatically estimate double articulation structures, i.e., hierarchically organized latent words and phonemes, embedded in speech signals. We propose this as a computationally valid explanation for the simultaneous acquisition of language and acoustic models. To develop the NPB-DAA, we introduce a probabilistic generative model called the hierarchical Dirichlet process hidden language model (HDP-HLM) as well as its inference algorithm.

The remainder of this paper is organized as follows. Section II describes the background of the proposed method. Section III presents the HDP-HLM, which extends the hierarchical Dirichlet process hidden semi-Markov model (HDP-HSMM) proposed by Johnson et al. [6]. The HDP-HLM is a probabilistic generative model that integrates acoustic and language models for continuous speech signals. Section IV describes the inference procedure of the HDP-HLM and our proposed NPB-DAA. Sections V and VI evaluate the effectiveness of the proposed method using synthetic data and actual sequential vowel speech signals. Section VII concludes this paper.

II Background

II-A Word segmentation using distributional cues in transcribed data

With respect to statistical computational models, many kinds of unsupervised machine learning methods for word segmentation have been proposed in the last two decades [7, 8, 9, 10, 11, 12, 13, 14, 15]. Brent [7] proposed model-based dynamic programming 1 (MBDP-1) for recovering deleted word boundaries in a natural-language text. MBDP-1 presumes that an information source generates the text explicitly and segments the target text so as to maximize the text’s probability. Venkataraman [8] proposed a statistical model for segmentation and word discovery from phoneme sequences by improving Brent’s algorithm.

Recently, Bayesian nonparametrics, including the hierarchical Dirichlet process and hierarchical Pitman-Yor process, have enabled more sophisticated methods for word segmentation. These models have fully Bayesian generative models and make it possible to calculate the appropriately smoothed n-gram probability for a word that has a long context. Theoretically, they can treat an infinite number of possible words. Goldwater [9, 10] proposed an HDP-based word segmentation method and showed that taking context into account is important for statistical word segmentation. Mochihashi et al. [11] proposed a nested Pitman-Yor language model (NPYLM), in which a letter n-gram model based on a hierarchical Pitman-Yor language model is embedded in the word n-gram model. They also developed the forward filtering backward sampling procedure to achieve efficient blocked Gibbs sampling and hence infer word boundaries.

However, all of the above-mentioned word segmentation methods presume that the learning system can obtain transcribed phoneme sequences or text data without any recognition errors. In practice, before acquiring a language model containing an inventory of words, a learning system, i.e., an infant, has to recognize speech signals without any knowledge of words, using only the knowledge of phonemes and/or syllables in an acoustic model. In such a recognition task, the phoneme recognition error rate inevitably becomes high. To overcome this problem, several researchers have proposed word discovery methods utilizing co-occurrence cues.

II-B Lexical acquisition using co-occurrence cues

Roy et al. [16] ambitiously implemented a computational model that enables a robot to autonomously discover words from raw multimodal sensory input. Their results were imperfect compared with recent state-of-the-art results. However, they showed that it is possible to develop cognitive models that can process raw sensor data and acquire a lexicon without the need for human transcription or labeling.

Iwahashi et al. [17] implemented an interactive learning method for a robot to acquire spoken words through human-robot interaction using audio-visual interfaces. Their learning process was carried out online, incrementally, actively, and in an unsupervised manner. Iwahashi et al. [18] also proposed a method that enables a robot to learn linguistic knowledge through human-robot communication in an unsupervised manner. The model combines speech, visual, and behavioral information in a probabilistic framework. Though its performance was still limited, the model is considered more sophisticated than that of Roy et al.’s previous study [16] from the viewpoint of statistical machine learning. On the basis of this work, Iwahashi et al. [19] developed an integrated online machine learning system, called LCore, combining speech, visual, and tactile information obtained through interaction. It enables robots to learn beliefs regarding speech units, words, the concepts of objects, motions, grammar, and pragmatic and communicative capabilities.

Araki et al. [20] built a robot that formed object categories and acquired their names by combining multimodal latent Dirichlet allocation (MLDA) and the NPYLM. They showed that iterative learning of the MLDA and NPYLM improves word segmentation performance by using distributional and co-occurrence cues simultaneously, but they reported that the prediction accuracy decreases as the phoneme recognition error rate increases. To overcome this problem, Nakamura et al. integrated statistical models for word segmentation and multimodal categorization. They showed that a robot can autonomously form object categories and related words from continuous speech signals and continuous visual, auditory, and haptic information by updating its language and categorization models iteratively [21].

Not only object information but also place information can be used as co-occurrence cues. Taguchi et al. [22] proposed a method for the unsupervised learning of place-names from pairs consisting of a spoken utterance and the mobile robot’s estimated current location, without any prior linguistic knowledge other than a phoneme acoustic model. They optimized a word list using a model selection method based on a description length criterion.

II-C Word segmentation using distributional cues in noisy input

As described above, it has become clear that using co-occurrence cues can mitigate the ill effects of phoneme recognition errors in a word discovery task. However, whether the word discovery task can be achieved solely from raw speech signals is still an open question. Neubig et al. [24] extended the unsupervised morphological analyzer proposed by Mochihashi et al. and enabled it to analyze phoneme lattices. Heymann et al. [25] modified Neubig et al.’s algorithm and proposed a suboptimal two-stage algorithm. Heymann et al. reported that their proposed method outperformed the original method in an experiment that used lattice input generated artificially from text input. In addition, they used the discovered language model for phoneme recognition in an iterative manner and reported that recognition performance was improved [26]. Elsner et al. [27] proposed a computational model that jointly performs word segmentation and learns an explicit model of phonetic variation. However, they did not start with acoustic sound, but with noisy transcribed text, i.e., recognized phoneme sequences containing errors. Their model does not include acoustic model learning.

Fig. 1: Overview of unsupervised learning of language and acoustic models through human-robot interaction, and the generative process of speech signal assumed in the DAA

They showed that the ill effects of phoneme recognition errors can be mitigated to some extent by using distributional information more appropriately. However, all of these methods, except for Iwahashi et al.’s, used an acoustic model previously trained in a supervised manner. Therefore, these models are insufficient as constructive models of language acquisition from raw speech signals. Hence, the unsupervised learning of an acoustic model is also an important problem.

II-D Unsupervised learning of an acoustic model

In contrast with the word segmentation task, the acquisition of an acoustic model is basically a task of categorizing the feature vectors extracted from continuous speech signals. Mixture models, including hidden Markov models (HMMs) and Gaussian mixture models, have been used to model phoneme category acquisition. For example, Lake et al. [28] used an online mixture estimation model, originally proposed by Vallabha et al. [Vallabha2007], for vowel category learning. However, phoneme acquisition has proven to be a complex categorization task in a feature space: the feature-vector distributions of different phonemes overlap with each other, and the actual sound of a phoneme depends on its context. Feldman et al. [29] pointed out that feedback information from segmented words is important for phonetic category acquisition. They demonstrated this effect through simulations using Bayesian models.

Lee et al. [30] proposed a hierarchical Bayesian model that can discover a proper set of sub-word units and an acoustic model in an unsupervised manner. However, their model did not estimate the language model. Lee et al. [31] also proposed a hierarchical Bayesian model that simultaneously discovers the phonetic inventory and the letter-to-sound mapping rules on the basis of transcribed data only. The method is not a completely unsupervised learning method from raw speech signals, but it does automatically determine relations between sounds and transcribed letters and forms an acoustic model in an unsupervised manner.

There have been several studies on the simultaneous unsupervised learning of acoustic and language models; however, very few statistical learning methods that can simultaneously acquire integrated acoustic and language models have been proposed. Brandl et al. [32] attempted to develop an unsupervised learning method that enables a robot to simultaneously obtain phonemes, syllables, and words from acoustic speech. They did not successfully build such a system, but reported their preliminary results. Walter et al. [33] proposed a word discovery method that uses an HMM-based method for finding acoustic unit descriptors in parallel with a dynamic time warping technique for finding word segments. However, their model is still heuristic from the viewpoint of probabilistic computational models. As Feldman et al. pointed out, word segmentation and phonetic category acquisition are undoubtedly mutually dependent. Therefore, a theoretically integrated probabilistic generative model for the simultaneous acquisition of language and acoustic models is desirable. Very recently, Kamper et al. [Kamper2015] and Lee et al. [Lee2015] proposed probabilistic computational models that achieved unsupervised direct word discovery from continuous speech signals. However, they did not provide an explicit, integrated probabilistic generative model for unsupervised simultaneous learning of language and acoustic models. To develop such an integrated theoretical model, the authors introduced the general concept of double articulation analysis.

II-E Double articulation analysis

From a general point of view, unsupervised word discovery from raw speech signals is regarded as a double articulation analysis of the time series data representing a speech signal. The double articulation structure is a well-known two-layer hierarchical structure, i.e., a word sequence is generated from a language model, a word is a sequence of phonemes, and each phoneme outputs observation data during the period it persists. The word discovery problem becomes a general problem about analyzing the time series data that potentially have a double articulation structure by estimating the latent acoustic model as well as the latent language model.

Taniguchi et al. [34] proposed a double articulation analyzer (DAA) by combining the sticky HDP-HMM and the NPYLM. The sticky HDP-HMM proposed by Fox et al. is a nonparametric Bayesian extension of the HMM [35]. Taniguchi et al. applied the DAA to human motion data to extract unit motions from unsegmented sequences. However, they simply used the two nonparametric Bayesian methods sequentially and did not integrate them into a single generative model. Therefore, if there are many recognition or categorization errors in the result of the first latent letter recognition process, i.e., the segmentation process by the sticky HDP-HMM, the performance of the subsequent process, i.e., unsupervised chunking by the NPYLM, deteriorates. In the terminology of the DAA, a latent letter and a latent word basically correspond to a phoneme and a word in speech signals, respectively. In this paper, we call this method the “conventional DAA” in order to differentiate it from the DAA newly proposed in this paper, i.e., the NPB-DAA. The conventional DAA has been successfully applied to human motion data and driving behavior data, which are also considered to potentially have a double articulation structure, and has been used for various purposes, e.g., segmentation [36], prediction [37, 38], data mining [39], topic modeling [40, 41], and video summarization [42]. The conventional DAA owes its success on driving behavior data to the fact that such data are continuous and smooth compared with raw speech signals: for a driving letter, which corresponds to a phoneme in continuous speech signals, the recognition error rate was still low. Consequently, a straightforward application of the conventional DAA to raw speech signals can be expected to perform poorly.

Therefore, based on the background mentioned above, in this paper, we propose an integrated probabilistic generative model, HDP-HLM, representing a latent double articulation structure that contains both a language model and an acoustic model. By assuming HDP-HLM as a generative model of observed time series data, and by inferring latent variables of the model, we can analyze latent double articulation structure of the data in an unsupervised manner. A novel double articulation analyzer is developed on the basis of the HDP-HLM and its inference algorithm. This HDP-HLM-based double articulation analysis method is called NPB-DAA.

III Generative model

In this section, we propose a novel generative model, the HDP-HLM, for time series data that potentially have a double articulation structure, by extending the HDP-HSMM [6]. As its name indicates, the HDP-HLM latently contains a language model. In contrast with the HDP-HMM, in which a latent state transits to the next state on the basis of a Markov process, a latent word in the HDP-HLM transits to the next latent word on the basis of a language model. An illustrative overview of the proposed method and the target task is shown in Fig. 1. We can naturally derive an inference procedure for the HDP-HLM based on the blocked Gibbs sampler. First, we briefly describe the HDP-HSMM. We then describe the HDP-HLM.

III-A HDP-HSMM

The HDP-HSMM is a nonparametric Bayesian extension of the conventional hidden semi-Markov model (HSMM) [6, 43]. Unlike the HDP-HMM, which is a nonparametric Bayesian extension of the conventional hidden Markov model (HMM) [35, 44], the HDP-HSMM explicitly models the duration of a hidden state. A graphical model of the HDP-HSMM is shown in Fig. 2.

Fig. 2: Model of the HDP-HSMM [6]

The generative process of the HDP-HSMM is described as follows.

$\beta \sim \mathrm{GEM}(\gamma)$ (1)
$\pi_i \sim \mathrm{DP}(\alpha, \beta) \qquad i = 1, 2, \ldots$ (2)
$(\theta_i, \omega_i) \sim H \times G \qquad i = 1, 2, \ldots$ (3)
$z_s \sim \pi_{z_{s-1}} \qquad s = 1, 2, \ldots, S$ (4)
$D_s \sim g(\omega_{z_s})$ (5)
$x_{t_s^1 : t_s^2} = z_s$ (6)
$y_t \sim h(\theta_{x_t}) \qquad t = t_s^1, \ldots, t_s^2$ (7)
$t_s^1 = \sum_{s' < s} D_{s'} + 1$ (8)
$t_s^2 = t_s^1 + D_s - 1$ (9)

where $\mathrm{GEM}$ and $\mathrm{DP}$ represent the stick-breaking process and the Dirichlet process, respectively [45, 44]. The parameters $\gamma$ and $\alpha$ are hyperparameters of these processes, $\beta$ is a global transition probability that becomes the base measure of the transition probability distributions, and $\pi_i$ is the transition probability distribution related to the $i$-th super state. Variable $z_s$ is the $s$-th super state in the sequence of super states, $D_s$ is the frame duration of $z_s$, and the variables $x_t$ and $y_t$ are a hidden state and an observation at time frame $t$, respectively. The parameters of the emission distribution and the duration distribution for the $i$-th super state are described as $\theta_i$ and $\omega_i$. Additionally, $H$ and $G$ are base measures for the emission distribution and the duration distribution. The functions $h$ and $g$ represent the emission and duration distributions, respectively. The time frames $t_s^1$ and $t_s^2$ correspond to the start point and the end point of the segment corresponding to $z_s$.

In contrast with the HMM, which assumes that a hidden state transits to the next hidden state according to a Markov process, the hidden semi-Markov model (HSMM) assumes that a hidden super state transits to the next hidden super state after a probabilistically determined duration $D_s$, which is sampled from a duration distribution $g(\omega_{z_s})$. The super state $z_s$ is sampled from a categorical distribution $\pi_{z_{s-1}}$ related to the previous super state $z_{s-1}$. When the super state and duration are sampled, the sequence of hidden states $x_{t_s^1 : t_s^2}$ is determined to be $z_s$.

An observation $y_t$ at time $t$ is assumed to be drawn from the emission distribution $h$ whose parameter is $\theta_{x_t}$. Observation data are generated in this way for $D_s$ steps.
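The generative process described above can be made concrete with a short, self-contained sketch. All parameters here (transition table, Gaussian emission means, Poisson duration rates) are invented for illustration; this is a toy explicit-duration HSMM sampler, not the paper's model:

```python
import math
import random

def sample_poisson(lam, rng):
    """Knuth's method: a stdlib-only Poisson sampler for small rates."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def generate_hsmm(trans, means, dur_rates, num_segments, rng):
    """Sample super states z_s, durations D_s, and observations y_t."""
    z, states, obs = 0, [], []
    for _ in range(num_segments):
        d = 1 + sample_poisson(dur_rates[z], rng)   # D_s ~ duration distribution
        for _ in range(d):
            states.append(z)                        # x_t = z_s within the segment
            obs.append(rng.gauss(means[z], 0.3))    # y_t ~ emission distribution
        z = rng.choices(range(len(means)), weights=trans[z])[0]  # next super state
    return states, obs

trans = [[0.1, 0.6, 0.3], [0.5, 0.1, 0.4], [0.4, 0.5, 0.1]]
states, obs = generate_hsmm(trans, means=[-1.0, 0.0, 1.5],
                            dur_rates=[3.0, 4.0, 2.0], num_segments=10,
                            rng=random.Random(0))
```

The explicit duration draw is what distinguishes this from an HMM, where segment lengths arise only implicitly from self-transitions.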

An efficient sampling inference procedure based on the backward filtering forward sampling technique was proposed for constructing a blocked Gibbs sampler [6]. A similar algorithm was proposed for the HDP-HMM by Fox et al. [35]. The algorithm is derived from a weak-limit approximation of the number of hidden super states. The computational cost of the message passing algorithm can be reduced to $O(TN^2D)$, where $T$ is the length of the observed data, $N$ is the state cardinality, and $D$ is the maximal duration of a super state used for truncation. The order is almost the same as that of the backward filtering forward sampling algorithm for the HDP-HMM, except for the factor $D$.

III-B HDP-HLM

The generative model for time series data that potentially have a double articulation structure can be obtained by extending the HDP-HSMM. A graphical model of the proposed HDP-HLM is shown in Fig. 3.

Fig. 3: Model of the proposed HDP-HLM

In the generative model of the HDP-HLM, a super state corresponds to a word in spoken language; this is the fundamental idea of the extension. The $i$-th super state, i.e., latent word $w_i$, has a latent letter (phoneme) sequence $w_i = (w_{i,1}, \ldots, w_{i,L_i})$, where $L_i$ is the length of the $i$-th word $w_i$. The generative process of the HDP-HLM is described as follows.

$\beta^{WM} \sim \mathrm{GEM}(\gamma^{WM})$ (10)
$\pi_j^{WM} \sim \mathrm{DP}(\alpha^{WM}, \beta^{WM}) \qquad j = 1, 2, \ldots$ (11)
$\beta^{LM} \sim \mathrm{GEM}(\gamma^{LM})$ (12)
$\pi_i^{LM} \sim \mathrm{DP}(\alpha^{LM}, \beta^{LM}) \qquad i = 1, 2, \ldots$ (13)
$w_{i,k} \sim \pi^{WM}_{w_{i,k-1}} \qquad k = 1, \ldots, L_i$ (14)
$\theta_j \sim H \qquad j = 1, 2, \ldots$ (15)
$\omega_j \sim G \qquad j = 1, 2, \ldots$ (16)
$z_s \sim \pi^{LM}_{z_{s-1}} \qquad s = 1, \ldots, S$ (17)
$l_{s,k} = w_{z_s, k} \qquad k = 1, \ldots, L_{z_s}$ (18)
$D_{s,k} \sim g(\omega_{l_{s,k}})$ (19)
$D_s = \sum_{k=1}^{L_{z_s}} D_{s,k}$ (20)
$x_t = l_{s,k} \qquad t_{s,k}^1 \le t \le t_{s,k}^2$ (21)
$y_t \sim h(\theta_{x_t})$ (22)
$t_{s,k}^1 = \sum_{s' < s} D_{s'} + \sum_{k' < k} D_{s,k'} + 1$ (23)
$t_{s,k}^2 = t_{s,k}^1 + D_{s,k} - 1$ (24)

where $\beta^{WM}$ is the base measure and $\gamma^{WM}$ and $\alpha^{WM}$ are hyperparameters of the word model, which generates words, i.e., latent letter sequences. Furthermore, $\mathrm{DP}(\alpha^{WM}, \beta^{WM})$ outputs $\pi_j^{WM}$, representing the transition probability from latent letter $j$ to the next latent letter. By contrast, $\beta^{LM}$ is the base measure, $\gamma^{LM}$ and $\alpha^{LM}$ are hyperparameters of the language model, and $\mathrm{DP}(\alpha^{LM}, \beta^{LM})$ outputs $\pi_i^{LM}$, representing the transition probability from latent word $i$ to the next latent word. The superscripts $LM$ and $WM$ indicate the language model and the word model, respectively. The latent letters contained in the $i$-th latent word are sequentially sampled from $\pi^{WM}$, and the $k$-th latent letter of the $i$-th latent word is represented by $w_{i,k}$. The emission distribution $h$ and the duration distribution $g$ have parameters $\theta_j$ and $\omega_j$ for the $j$-th latent letter, respectively; the base measures $H$ and $G$ generate $\theta_j$ and $\omega_j$, respectively. Variable $z_s$ is the $s$-th latent word in the sequence of latent words and corresponds to the super state in the HDP-HSMM, $D_s$ is the frame duration of $z_s$, $l_{s,k}$ is the $k$-th latent letter of the $s$-th latent word, and $D_{s,k}$ is the frame duration of $l_{s,k}$. The variables $x_t$ and $y_t$ are a hidden state and an observation at time frame $t$, respectively. The time frames $t_{s,k}^1$ and $t_{s,k}^2$ correspond to the start point and the end point of the segment corresponding to $l_{s,k}$, respectively.

In contrast with HMMs, the duration distribution is explicitly determined for each latent letter in the HDP-HLM. The HDP-HLM inherits this property from the HDP-HSMM [6]. The duration $D_{s,k}$ of latent letter $l_{s,k}$, which is the $k$-th latent letter of the $s$-th latent word in a sampled word sequence, is drawn from the duration distribution $g(\omega_{l_{s,k}})$, where $\omega_{l_{s,k}}$ is the duration parameter for latent letter $l_{s,k}$. The duration of a latent word becomes $D_s = \sum_k D_{s,k}$. If we assume that $g$ is a Poisson distribution, the duration distribution of a latent word also follows a Poisson distribution; in this case, the Poisson parameter of the word's duration distribution is the sum of the Poisson parameters of its letters. This relation follows from the reproductive property of Poisson distributions.
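The reproductive property invoked here can be verified numerically. The sketch below uses invented per-letter rates and a stdlib-only Poisson sampler to check that summed letter durations have the mean and variance of a single Poisson with the summed rate:

```python
import math
import random

def sample_poisson(lam, rng):
    """Knuth's method; adequate for the small rates used in this toy."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

rng = random.Random(1)
letter_rates = [3.0, 5.0, 2.0]     # hypothetical per-letter duration rates
n = 20000
word_durs = [sum(sample_poisson(lam, rng) for lam in letter_rates)
             for _ in range(n)]
mean = sum(word_durs) / n
var = sum((d - mean) ** 2 for d in word_durs) / n
# For a Poisson with rate 3 + 5 + 2 = 10, both the mean and the
# variance equal 10; the empirical word durations should match.
```

The equality of mean and variance is the telltale Poisson signature that the word-level duration inherits from its letters.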

In the HDP-HLM, latent word $z_s$ determines a latent letter sequence via $l_{s,k} = w_{z_s,k}$. Based on the determined sequence, the duration $D_{s,k}$ of each latent letter is drawn, and the corresponding observations are drawn from the emission distribution of that letter. The maps $s(t)$ and $k(t)$ represent the indices of the word and the letter, respectively, in a latent word sequence at time $t$. Using this generative model, continuous time series data with a latent double articulation structure can be generated. In this paper, we assume that the observed time series datum $y_t$ represents a feature vector of the speech signal at time $t$ and is generated in this way. Generally, the HDP-HLM can be applied to any kind of time series data that have a double articulation structure.
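A minimal end-to-end sketch may clarify the two-layer flow word → letters → durations → observations. Here a fixed hypothetical lexicon and word-bigram table stand in for the nonparametric priors, and the duration and emission distributions are toy choices, not the paper's:

```python
import random

# Hypothetical lexicon: each latent word is a sequence of latent letters.
LEXICON = {0: [0, 1], 1: [2, 0], 2: [1, 2, 0]}
WORD_BIGRAM = {0: [0.2, 0.5, 0.3], 1: [0.6, 0.1, 0.3], 2: [0.4, 0.4, 0.2]}
LETTER_MEAN = {0: -1.0, 1: 0.0, 2: 1.5}   # emission parameter per latent letter

def generate(num_words, rng):
    z, letters, obs = 0, [], []
    for _ in range(num_words):
        for letter in LEXICON[z]:             # l_{s,k}: letters of word z_s
            d = 2 + rng.randrange(3)          # toy letter duration (2-4 frames)
            letters.extend([letter] * d)      # x_t within the letter's segment
            obs.extend(rng.gauss(LETTER_MEAN[letter], 0.2) for _ in range(d))
        z = rng.choices([0, 1, 2], weights=WORD_BIGRAM[z])[0]  # word bigram step
    return letters, obs

letters, obs = generate(num_words=8, rng=random.Random(0))
```

Inference inverts exactly this process: given only `obs`, the NPB-DAA must recover the letter segments, the word boundaries, and the lexicon itself.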

We now review the generative model from the viewpoint of language acquisition. In the conventional DAA [34], the analyzer is composed of two separate machine learning methods, i.e., the sticky HDP-HMM for encoding observation data into letter sequences and the NPYLM for chunking letter sequences into word sequences. On the one hand, the transition probabilities $\pi^{LM}$ and $\pi^{WM}$ correspond to the word bigram and letter bigram models in the NPYLM, respectively; therefore, they contain the information of a language model. On the other hand, $\{\theta_j, \omega_j\}$ contains the information of an acoustic model, which corresponds to the sticky HDP-HMM in the conventional DAA.

The HDP-HLM assumes that the language model is a word bigram model. Mochihashi et al. compared bigram and trigram language models and showed that the trigram assumption hardly improved word segmentation performance while increasing computational cost and complexity [11]. Therefore, the bigram assumption is considered adequate for a word segmentation and word discovery task.

If we derive an efficient inference procedure for this two-layer hierarchical generative model, the inference procedure can infer the acoustic model and language model simultaneously.

IV Inference algorithm

In this section, we derive an approximated blocked Gibbs sampler for the HDP-HLM. The sampler can simultaneously infer latent letters, latent words, a language model, and an acoustic model. Concurrently, the inference procedure can estimate the overall double articulation structure from continuous time series data. Therefore, we propose the unsupervised machine learning method NPB-DAA. The overall inference procedure is shown in Algorithm 1.

IV-A Inference of latent words:

In the HDP-HSMM, a backward filtering forward sampling procedure is adopted instead of the direct assignment procedure. When each latent state strongly depends on its neighboring latent states, the direct assignment procedure, which is a naive implementation of the Gibbs sampler, results in a poor mixing rate [6]. Johnson et al. showed that a blocked Gibbs sampler using a backward filtering forward sampling procedure, which can simultaneously sample all hidden states of an observed sequence, outperforms a direct-assignment Gibbs sampler. By extending the backward filtering forward sampling procedure and making it applicable to the HDP-HLM, we can obtain an inference procedure for the HDP-HLM.

The calculation of the backward messages for super states in HDP-HSMM is as follows.

$\beta_t(i) := p(y_{t+1:T} \mid z_t = i, F_t = 1)$ (25)
$= \sum_j \beta_t^*(j)\, p(z_{t+1} = j \mid z_t = i)$ (26)
$\beta_t^*(j) := p(y_{t+1:T} \mid z_{t+1} = j, F_t = 1)$ (27)
$= \sum_{d=1}^{D} \beta_{t+d}(j)\, p(D_{t+1} = d \mid z_{t+1} = j)\, p(y_{t+1:t+d} \mid z_{t+1} = j, D_{t+1} = d)$ (28)
$\beta_T(i) := 1$ (29)

where $F_t$ is a binary variable indicating whether $t$ is the boundary of a super state, and the base case is $\beta_T(i) := 1$. The message $\beta_t(i)$ in (25) represents the probability of the future observations given that the latent super state at time $t$ is $i$ and that it transitions into a different super state at the next time step; it is obtained in (26) by marginalizing over all super states at time step $t+1$. The message $\beta_t^*(j)$ in (27) represents the probability of the future observations given that the latent super state becomes $j$ from time step $t+1$; this probability is obtained by marginalizing over the duration variable $d$ in (28). The probability $p(y_{t+1:t+d} \mid z_{t+1} = j, D_{t+1} = d)$ in (28) is the emission probability of the observed data given that the duration of super state $j$ is $d$. In the HDP-HSMM, all time steps with the same super state share the same emission distribution. Therefore, the likelihood of a super state can be calculated easily.

Surprisingly, in the HDP-HLM, the exact same procedure for calculating backward messages as in the HDP-HSMM can be used. We obtain a message passing algorithm for the HDP-HLM by replacing a super state in the HDP-HSMM with a latent word in the HDP-HLM. Only the likelihood of the latent word, i.e., the term $p(y_{t+1:t+d} \mid z_{t+1} = j, D_{t+1} = d)$, differs between the two message passing algorithms. The likelihood of latent word $w_i$ emitting the observations then becomes

$p(y_{t+1:t+d} \mid z = i, D = d) = \sum_{(d_1, \ldots, d_{L_i}) \in \Delta_d^{L_i}} \prod_{k=1}^{L_i} \Big[ g(d_k \mid \omega_{w_{i,k}}) \prod_{t' = t + \bar{d}_{k-1} + 1}^{t + \bar{d}_k} h(y_{t'} \mid \theta_{w_{i,k}}) \Big], \qquad \bar{d}_k = \sum_{k' \le k} d_{k'}$ (30)
$\Delta_d^{L_i} = \Big\{ (d_1, \ldots, d_{L_i}) \;\Big|\; \sum_{k=1}^{L_i} d_k = d, \; d_k \ge 1 \Big\}$ (31)

where $L_i$ indicates the number of latent letters in word $w_i$, and $(d_1, \ldots, d_{L_i}) \in \Delta_d^{L_i}$ is an $L_i$-partition of the duration $d$. By substituting (30) into (28), we can obtain a formula to calculate the backward message of the HDP-HLM.

The calculation of (30) looks complicated at first glance. However, its value can be calculated efficiently using dynamic programming. If we define the forward message $\alpha_k(\tau)$ as the probability that the $k$-th latent letter in the relevant latent word transits to the next latent letter after the letters up to the $k$-th have emitted the first $\tau$ observations of the segment, the forward message can be recursively calculated as follows:

\alpha_0(\tau) = \delta(\tau, t)   (32)
\alpha_k(\tau) = \sum_{\tau' = t+k-1}^{\tau - 1} \alpha_{k-1}(\tau')\, p(d = \tau - \tau' \mid w_{j,k}) \prod_{\tau'' = \tau' + 1}^{\tau} p(y_{\tau''} \mid w_{j,k})   (33)

As a result, p(y_{t+1:t+d} \mid z_{t+1} = j, D_{t+1} = d) = \alpha_{L_j}(t+d). By applying the formulas shown above, the backward messages \beta_t(i) and \beta^*_t(j) can be calculated. Using this calculation procedure for the backward messages, the forward sampling procedure proposed for the HDP-HSMM can be employed. The backward filtering forward sampling procedure enables the blocked Gibbs sampler to sample latent words directly from the observed data without explicitly sampling latent letters in the HDP-HLM.
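The dynamic programming of (30)–(33) can be sketched as follows: the forward message marginalizes over all letter-duration partitions of a segment. The word, duration, and emission tables here are illustrative assumptions, not values from the experiments.

```python
# Sketch of the forward-message DP in (32)-(33): computing the likelihood of a
# latent word (a letter sequence) emitting a whole segment, marginalizing over
# all ways of partitioning the segment among the word's letters.
word = [0, 1]                                   # letter indices w_{j,1:L}
emit = [{0: 0.9, 1: 0.1}, {0: 0.2, 1: 0.8}]     # per-letter discrete emissions
dur = [{1: 0.5, 2: 0.5}, {1: 0.5, 2: 0.5}]      # per-letter duration pmf p(d | letter)
y = [0, 0, 1]                                   # observed segment of length d = 3

def word_likelihood(word, y):
    d = len(y)
    L = len(word)
    # alpha[k][tau]: probability that the k-th letter ends exactly at offset tau,
    # having emitted y[0:tau]; alpha[0][0] = 1 plays the role of eq. (32).
    alpha = [[0.0] * (d + 1) for _ in range(L + 1)]
    alpha[0][0] = 1.0
    for k in range(1, L + 1):
        c = word[k - 1]
        for tau in range(k, d + 1):             # letter k ends at offset tau
            for tau0 in range(k - 1, tau):      # letter k starts just after tau0
                seg = 1.0
                for t in range(tau0, tau):
                    seg *= emit[c][y[t]]
                alpha[k][tau] += alpha[k - 1][tau0] * dur[c].get(tau - tau0, 0.0) * seg
    return alpha[L][d]                          # eq. (30): marginalized likelihood

p = word_likelihood(word, y)
print(p)   # 0.198: sum over the two partitions (1,2) and (2,1)
```

For the two-letter word above, only the partitions (d_1, d_2) = (1, 2) and (2, 1) have nonzero probability, and the DP recovers exactly their summed contributions.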

In the forward sampling procedure, the super state z_s and its duration D_s are sampled iteratively using the backward messages as follows:

z_s \sim p(z_s = j \mid z_{s-1})\, \beta^*_{t_s}(j)   (34)
D_s \sim p(D_s = d \mid z_s)\, p(y_{t_s+1:t_s+d} \mid z_s, D_s = d)\, \beta_{t_s+d}(z_s)   (35)

where t_s = \sum_{s' < s} D_{s'} is the start time of the s-th segment. For further details, please refer to the original paper in which the HDP-HSMM was introduced [6].
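The forward sampling step of (34)–(35) can be sketched as follows, assuming the backward messages have already been computed. The message values and the segment likelihood below are placeholder stand-ins, not quantities from the experiments.

```python
import random

# Sketch of forward sampling, eqs. (34)-(35): given precomputed backward
# messages, iteratively draw a super state (latent word) and its duration.
random.seed(0)
T = 6
N = 2
trans = [[0.0, 1.0], [1.0, 0.0]]       # p(z_s | z_{s-1}), no self-transitions
# Hypothetical backward messages beta[t][j], betas[t][j] (e.g. from the
# backward filtering recursion); here filled with placeholder positives.
beta = [[0.5, 0.4]] * (T + 1)
betas = [[0.3, 0.6]] * (T + 1)
dur = [{1: 0.5, 2: 0.5}, {1: 0.3, 2: 0.7}]
def seg_like(j, t, d):                 # stand-in for p(y_{t+1:t+d} | z=j, d)
    return 0.5 ** d

t, z_prev, segments = 0, None, []
while t < T:
    # eq. (34): z_s ~ p(z_s | z_{s-1}) * betas[t][z_s]
    w = [(1.0 / N if z_prev is None else trans[z_prev][j]) * betas[t][j]
         for j in range(N)]
    z = random.choices(range(N), weights=w)[0]
    # eq. (35): D_s ~ p(d | z) * p(y_{t+1:t+d} | z, d) * beta[t+d][z]
    ds = [d for d in dur[z] if t + d <= T]
    wd = [dur[z][d] * seg_like(z, t, d) * beta[t + d][z] for d in ds]
    d = random.choices(ds, weights=wd)[0]
    segments.append((z, d))
    t, z_prev = t + d, z
print(segments)
```

Because the toy transition matrix forbids self-transitions, consecutive sampled super states always differ, and the sampled durations tile the sequence exactly.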

IV-B Sampling a letter sequence for a latent word

The sampled z_s is only an index of a latent word. A concrete letter sequence for each latent word should be sampled according to the correspondence between each sub-sequence of the time series data and each latent word. When a latent word is given, the generative model of the observations in the range of that latent word can be regarded as an HDP-HSMM whose super states correspond to latent letters. Therefore, in the proposed model, each sub-sequence of observation data corresponding to a latent word can be considered an observed sequence generated by an HDP-HSMM. If only a single sub-sequence of observations corresponded to a latent word, a latent letter sequence could be sampled using the ordinary sampling procedure of the HDP-HSMM. However, all observations containing the same latent word have to share the same latent letter sequence. Therefore, the latent letter sequences for observations with the same latent word are sampled simultaneously, under the constraint that they share the same latent letter sequence. We employ an approximate sampling procedure based on sampling importance resampling (SIR) [46].

If we define the set of observations sharing the i-th latent word as Y_i = \{y^{(m)}\}_m and the shared latent letter sequence as w_i, the posterior probability becomes

p(w_i \mid Y_i) \propto p(Y_i \mid w_i)\, p(w_i)   (36)
= \prod_{m} p(y^{(m)} \mid w_i)\, p(w_i)   (37)

where p(y^{(m)} \mid w_i) in (37), representing the likelihood of each observation, can be calculated using the backward filtering procedure of the HDP-HSMM. This probability can also be calculated in the same way as (30) if w_i is given. The HDP-HSMM also provides a sampling procedure for a letter sequence given a single observation. Therefore, if we regard p(w_i \mid y^{(m)}) as the proposal distribution and the ratio of (37) to the proposal as the weight, the SIR procedure can be employed [46]. Specifically, after a set of candidate letter sequences is sampled from the proposal distribution, a final sample is drawn from the set with probability proportional to each candidate's weight. Using this procedure, the proposed model can approximately sample a latent letter sequence for the i-th latent word.
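The SIR step can be sketched generically as follows. The candidate letter sequences, proposal probabilities, and likelihood function are hypothetical stand-ins for the quantities in (36)–(37).

```python
import random

# Sketch of sampling importance resampling (SIR) for the shared letter sequence:
# candidates are drawn from a proposal based on a single observation, then one
# candidate is resampled with probability proportional to its weight (the joint
# likelihood over all observations sharing the word divided by the proposal).
random.seed(1)

candidates = ["ab", "aab", "ba"]          # hypothetical letter sequences
proposal_p = {"ab": 0.5, "aab": 0.3, "ba": 0.2}

def joint_likelihood(w, observations):
    # Stand-in for prod_m p(y^(m) | w_i) p(w_i); favors "ab" in this toy setup.
    base = {"ab": 0.6, "aab": 0.3, "ba": 0.1}[w]
    return base ** len(observations)

observations = ["y1", "y2", "y3"]         # observations sharing the same word

# Draw R candidates from the proposal, weight by target/proposal, resample one.
R = 100
draws = random.choices(candidates,
                       weights=[proposal_p[c] for c in candidates], k=R)
weights = [joint_likelihood(w, observations) / proposal_p[w] for w in draws]
final = random.choices(draws, weights=weights)[0]
print(final)
```

The resampling step concentrates probability on letter sequences that explain all observations sharing the word, not just the single observation used by the proposal.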

IV-C Sampling model parameters

After sampling latent words for each observed sequence and sampling letter sequences for the latent words, the other parameters can be updated. The parameters of the language model can be updated on the basis of the latent word sequences. The parameters of the word model can be updated on the basis of the sampled letter sequences for the latent words. The parameters of the acoustic model can be updated once a hidden state is determined for each time step. During the SIR process for sampling a letter sequence, tentative latent letter assignments are obtained as a by-product in Algorithm 1. To accelerate the mixing rate, these subsidiary sampling results are used for updating the acoustic model parameters. These parameters can be sampled in the same way as in the HDP-HSMM; for more details, we refer the reader to the original paper in which the HDP-HSMM was introduced [6]. Finally, the overall sampling procedure is obtained, as described in Algorithm 1.

  Initialize all parameters.
  Observe time series data y_{1:T}.
  repeat
     for each observed sequence do
        // Backward filtering procedure
        For each super state j, initialize the messages \beta_T(j) = 1.
        for t = T-1 down to 0 do
           For each super state j, compute the backward messages \beta_t(j) and \beta^*_t(j) using (25)–(28).
        end for
        // Forward sampling procedure
        Initialize t_1 = 0 and s = 1.
        while t_s < T do
           // Sampling a super state representing a latent word
           z_s \sim p(z_s \mid z_{s-1})\, \beta^*_{t_s}(z_s)
           // Sampling duration of the super state
           D_s \sim p(D_s \mid z_s, y_{t_s+1:T})
           t_{s+1} = t_s + D_s
           s = s + 1
        end while
        // Sampling tentative latent letter sequences
        for each sampled latent word z_s do
           Sample a tentative latent letter sequence for the corresponding sub-sequence of observations.
        end for
     end for
     // Update model parameters
     Sample acoustic model parameters on the basis of the tentatively sampled latent letter sequences.
     Sample language model parameters on the basis of the sampled super states, i.e., latent words.
     Sample a word inventory using the SIR procedure (see (37)).
     Sample a word model on the basis of the sampled word inventory.
  until a predetermined exit condition is satisfied.
Algorithm 1 Blocked Gibbs sampler for HDP-HLM
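As a sketch of the acoustic-model update step in Algorithm 1, the following example resamples a single latent letter's Gaussian mean from its conjugate normal posterior, assuming a known emission variance. The prior values and the data are illustrative assumptions, not the experimental settings.

```python
import random, math

# Sketch of one acoustic-model parameter update: resampling a latent letter's
# Gaussian mean from its conjugate normal posterior, given the frames currently
# assigned to that letter (known emission variance assumed for simplicity).
random.seed(3)

mu0, tau0 = 0.0, 10.0      # prior mean and prior variance of the letter mean
sigma2 = 0.25              # emission variance, assumed known here

def sample_mean(frames):
    n = len(frames)
    post_var = 1.0 / (1.0 / tau0 + n / sigma2)
    post_mean = post_var * (mu0 / tau0 + sum(frames) / sigma2)
    return random.gauss(post_mean, math.sqrt(post_var))

# Frames tentatively assigned to one latent letter (synthetic, true mean 2.0).
frames = [random.gauss(2.0, 0.5) for _ in range(200)]
mu = sample_mean(frames)
print(mu)
```

With many assigned frames the posterior concentrates near the empirical mean, so successive Gibbs sweeps keep the letter's emission distribution consistent with its assigned frames.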

IV-D NPB-DAA

Based on the generative model, the HDP-HLM, and its inference algorithm shown in Algorithm 1, the proposed NPB-DAA is finally obtained. By assuming the HDP-HLM as a generative model of observed time series data, and by inferring the latent variables of the model, we can analyze the latent double articulation structure of the data, i.e., its hierarchically organized latent words and phonemes, in an unsupervised manner. We call this novel unsupervised double articulation analyzer the NPB-DAA.

V Experiment 1: Synthetic data

We conducted an experiment using a synthetic dataset that explicitly has a double articulation structure to validate our proposed method.

V-A Conditions

To validate the ability of our proposed method to infer a latent double articulation structure in time series data, we applied the proposed NPB-DAA based on the HDP-HLM to synthetic time series data. The conventional DAA was employed as a comparative method. The time series data were generated using five letters and four words, where each word is a sequence generated by combining letters sequentially. The four words were generated randomly. The durations of the letters were assumed to follow Poisson distributions whose parameters were drawn from a Gamma distribution. The emission distribution of each latent letter was assumed to be a Gaussian distribution. The variance of the emission distribution was changed in three stages (\sigma^2 = 0.1, 0.5, and 1.0), and the inference results were compared. Forty time series data items were generated from 20 types of latent word sequences. Sixteen of them were pairs of words, and four of them were three-word sentences. Two observations were generated from each word sequence.
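A data-generation process of this kind can be sketched as follows. The letter means, word inventory, and Poisson rate below are illustrative values, not the settings used in the experiment.

```python
import random, math

# Sketch of generating double-articulation-structured synthetic data: words are
# letter sequences, and each letter emits Gaussian observations for a
# Poisson-distributed duration. All numeric values are illustrative.
random.seed(2)

letter_mean = {0: -4.0, 1: -2.0, 2: 0.0, 3: 2.0, 4: 4.0}   # Gaussian means
sigma = 0.5                                                 # emission std dev
words = [[0, 1], [2, 3, 4], [1, 3], [4, 0]]                 # four random words

def sample_poisson(lam):
    # Knuth's algorithm; the result is shifted by 1 so every letter lasts >= 1.
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= random.random()
    return (k - 1) + 1

def generate(word_sequence, lam=5.0):
    obs, boundaries = [], []
    for w in word_sequence:
        for letter in words[w]:
            d = sample_poisson(lam)
            obs.extend(random.gauss(letter_mean[letter], sigma)
                       for _ in range(d))
        boundaries.append(len(obs))        # word boundary positions
    return obs, boundaries

obs, boundaries = generate([0, 2])         # a two-word "sentence"
print(len(obs), boundaries)
```

The recorded boundary positions serve as ground truth for evaluating the segmentation recovered by the analyzer.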

We set the parameters of the NPB-DAA as follows. The maximum number of words was six for the weak-limit approximation of the latent language model, and the maximum number of letters was seven for the weak-limit approximation of the latent word model. The hyperparameters of the latent language model, the latent word model, the duration distributions, and the emission distributions were fixed. The Gibbs sampling procedure was iterated 100 times.

For the conventional DAA, we set the hyperparameters of the sticky HDP-HMM to be as similar to those of the NPB-DAA as possible. In this condition, the latticelm software111latticelm: http://www.phontron.com/latticelm/index.html developed by Neubig et al. was used as the NPYLM implementation. The hyperparameters of the NPYLM used in the conventional DAA were also fixed.

The hyperparameters in the NPB-DAA were heuristically given in a top-down manner by referring to the size of the state space and the approximate duration of a phoneme. Those of the Pitman-Yor language model were set to the default values of the software.

V-B Results

The average log-likelihood is shown in Fig. 4, where error bars represent the standard deviation of 30 trials. These results show that the proposed inference procedure worked appropriately, gradually sampling more probable latent variables as the iterations increased.

In contrast with ordinary speech recognition tasks, the target task (language acquisition and double articulation analysis) is an unsupervised learning task, specifically a clustering task. Therefore, it is difficult to evaluate the methods' performance in terms of precision and recall, because the estimated index of a cluster and the label in the ground truth data usually differ. We evaluated the obtained results using the adjusted Rand index (ARI), which quantifies the performance of a clustering task [47]. If all data items are clustered randomly or assigned to only one cluster, the ARI becomes 0. By contrast, if the clustering result matches the ground truth data, the ARI becomes 1.
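For reference, the ARI can be computed from the standard pair-counting formula with only the standard library; this sketch follows the usual definition.

```python
from math import comb
from collections import Counter

# Adjusted Rand index (ARI): compares an estimated clustering with ground-truth
# labels; 0 in expectation for random clusterings, 1 for a perfect match
# (invariant to relabeling of cluster indices).
def adjusted_rand_index(truth, pred):
    n = len(truth)
    pair_counts = Counter(zip(truth, pred))          # contingency table n_ij
    sum_ij = sum(comb(c, 2) for c in pair_counts.values())
    sum_a = sum(comb(c, 2) for c in Counter(truth).values())
    sum_b = sum(comb(c, 2) for c in Counter(pred).values())
    expected = sum_a * sum_b / comb(n, 2)
    max_index = (sum_a + sum_b) / 2
    if max_index == expected:                        # degenerate partitions
        return 0.0
    return (sum_ij - expected) / (max_index - expected)

truth = [0, 0, 0, 1, 1, 1]
perfect = [1, 1, 1, 0, 0, 0]             # same partition, different labels
print(adjusted_rand_index(truth, perfect))   # 1.0: ARI ignores label identity
```

Because the ARI depends only on which items are grouped together, it is well suited to evaluating clusterings whose cluster indices need not match the ground-truth labels.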

Table I shows the ARI for the estimated latent letters, which indicates how accurately each method estimated the latent letters corresponding to phonemes in speech signals. Table II shows the ARI for the estimated latent words, which indicates how accurately each method estimated the latent words corresponding to words in speech signals. In both tables, each column shows the ARIs for a different emission variance \sigma^2. A higher ARI implies more accurate estimation of the latent variables.

Although the ARI for the latent letters obtained by the conventional DAA decreases as the variance increases, that of the NPB-DAA did not decrease as much. As the ARIs for latent words show, the word segmentation performance of the conventional DAA was poor even when its ARI for latent letters was relatively high. In contrast, the ARI for latent words estimated by the NPB-DAA was over 0.5 in all conditions. This shows that the NPB-DAA can mitigate the ill effects of phoneme recognition errors in the word segmentation task, and that acquired knowledge about words can improve phoneme recognition performance through contextual information. Fig. 5 shows the change in ARI through the iterations. The ARI also increased gradually as the log-likelihood increased, as in Fig. 4. These results suggest that the HDP-HLM is an appropriate generative model, because better word segmentation performance corresponded to a higher likelihood of the model.

To check the effects of the limit in the weak-limit approximation, we ran an experiment in which the maximum number of letters was set to 20. The NPB-DAA still estimated the latent words and latent letters appropriately, and the estimated number of latent letters remained close to the true number on average. This result shows that, owing to the nature of Bayesian nonparametrics, our model can appropriately estimate the number of latent states when the limit is sufficiently large.

An example of the estimated latent variables is shown in Fig. 6, which shows the results for time series data generated from one of the latent word sequences. The input time series data are shown at the very top of the figure. The top of each panel shows the true latent letters or latent words, whereas the panel beneath shows the inferred results. The vertical axes represent the iterations of the Gibbs sampling. In Fig. 6, the middle panel shows the latent word sequence estimated using the proposed method, and the bottom panel shows the estimated boundaries of the latent words. These results show that the inference procedure works consistently and can estimate adequate boundaries for the latent words given the data.

Fig. 4: Log-likelihood profile through Gibbs sampling
Fig. 5: ARI profile through Gibbs sampling
Fig. 6: Example of inference results for sample data: (top) observation data, (upper middle) latent letters, (lower middle) latent words, and (bottom) the boundaries of latent words. Different colors denote different states.
TABLE I: ARI for estimated latent letters
Method \ \sigma^2: 0.1 / 0.5 / 1.0
Conventional DAA (sticky HDP-HMM): 0.845 / 0.832 / 0.649
NPB-DAA: 0.984 / 0.895 / 0.938

TABLE II: ARI for estimated latent words
Method \ \sigma^2: 0.1 / 0.5 / 1.0
Conventional DAA (sticky HDP-HMM + NPYLM): 0.122 / 0.107 / 0.125
NPB-DAA: 0.594 / 0.509 / 0.618

These results show that the proposed method is a more effective machine learning method than the conventional DAA for estimating a latent double articulation structure embedded in time series data.

VI Experiment 2: Continuous Japanese Vowel Speech Signals

In the second experiment, we evaluated our proposed method using Japanese vowel speech signals to test its applicability to actual human continuous speech signals.

VI-A Conditions

We prepared four datasets. Each dataset corresponds to one speaker and consists of 60 audio data items. We asked two male and two female Japanese speakers to read 30 artificial sentences aloud twice at a natural speed, and recorded their voices. The 30 sentences were prepared using five words {aioi, aue, ao, ie, uo}, which consist of the five Japanese vowels {a, i, u, e, o}, representing [ä], [i], [ɯ], [e̞], and [o̞] in phonetic symbols, respectively. By reordering the five words, we prepared 25 two-word sentences, e.g., “ao aioi,” “uo aue,” and “aioi aioi,” and five three-word sentences, i.e., “uo aue ie,” “ie ie uo,” “aue ao ie,” “ao ie ao,” and “aioi uo ie.” The set of two-word sentences consists of all possible word pairs (5 × 5 = 25). The set of three-word sentences was generated randomly.

The recorded data were encoded into 13-dimensional mel-frequency cepstrum coefficient (MFCC) time series data using the HMM Toolkit (HTK)222Hidden Markov Model Toolkit: http://htk.eng.cam.ac.uk/. The frame size and shift were set to 25 ms and 10 ms, respectively. Twelve-dimensional MFCC data were obtained as input data by eliminating the power information from the original 13-dimensional MFCC data. As a result, 12-dimensional time series data at a frame rate of 100 Hz were obtained.

For the weak-limit approximation, the maximum number of words was set to seven and the maximum number of letters was also set to seven. The hyperparameters of the latent language model, the latent word model, the duration distributions, and the emission distributions were fixed heuristically.

For the conventional DAA, we set the hyperparameters of the sticky HDP-HMM to be as similar to those of the NPB-DAA as possible. The hyperparameters of the NPYLM used in the conventional DAA were also fixed. The Gibbs sampling procedure was iterated 100 times. Twenty trials were performed with different random number seeds.

The parameters in the NPB-DAA were given in a top-down manner heuristically by referring to the size of the state space and the approximate duration of a phoneme. Those of the Pitman-Yor language model were set to the default values of the software.

As a baseline method, we employed an open-source continuous speech recognition engine, Julius,333Open-Source Large Vocabulary CSR Engine Julius: http://julius.sourceforge.jp/. The Linux binary dictation-kit-v4.3.1-linux.tgz was used in this experiment. The software encodes the recorded data into 36-dimensional MFCC data including dynamic features and uses them for speech recognition. which is widely used in Japanese speech recognition tasks. Julius's acoustic model is trained using a large amount of speech data in a supervised manner. We prepared four conditions for Julius. The first was called “Julius (phoneme + NPYLM).” In this condition, we used Julius as a phoneme recognition system by preparing a phoneme dictionary containing the five Japanese vowels {a, i, u, e, o}. Julius's dictionary also contains silB and silE to represent silence, owing to system requirements. After encoding the continuous speech signals into phoneme sequences using Julius as a phoneme recognizer, unsupervised morphological analysis based on the NPYLM was conducted to discover words and a language model. The second condition was called “Julius (phoneme + latticelm).” In this condition, we instead used latticelm, an unsupervised morphological analyzer for the lattice output of an ASR system, proposed by Neubig et al. as an extension of Mochihashi's NPYLM [24].

In the third and fourth conditions, called “Julius (monophone + word dictionary)” and “Julius (triphone + word dictionary),” respectively, we prepared a complete word dictionary for Julius that contained all of the words appearing in the target speech signals, i.e., {aioi, aue, ao, ie, uo}. These conditions provide an approximate upper bound on the performance for our task. Except in “Julius (triphone + word dictionary),” Julius uses the monophone-based acoustic model contained in the dictation kit. The acoustic model is trained in a supervised manner using a large amount of labeled speech data. “Julius (triphone + word dictionary)” used a triphone-based acoustic model for comparison.

VI-B Results

We provided word and letter ground truth labels to all frames of the speech signal data and evaluated the relationship between the truth labels and estimated latent letter and word indices.

Method / Letter ARI / Word ARI / AM / LM
NPB-DAA (MAP): 0.596 / 0.529 / – / –
NPB-DAA: 0.561 / 0.401 / – / –
Conventional DAA: 0.590 / 0.090 / – / –
Julius (phoneme dictionary + NPYLM): 0.486 / 0.297 / ✓ / –
Julius (phoneme dictionary + latticelm): 0.554 / 0.337 / ✓ / –
Julius (monophone + word dictionary): 0.586 / 0.487 / ✓ / ✓
Julius (triphone + word dictionary): 0.548 / 0.616 / ✓ / ✓
TABLE III: ARI for estimated latent letters and words

The results are shown in Table III. Check marks in the AM and LM columns indicate that the method used a pretrained acoustic model (AM) or the given true language model (LM), respectively. Letter ARI shows the ARI of phoneme clustering; a higher Letter ARI means more accurate phoneme acquisition and recognition. Word ARI shows the ARI of word clustering; a higher Word ARI means more accurate word discovery and recognition. Each row corresponds to one of the methods described in the conditions. The rows “NPB-DAA” and “Conventional DAA” show the ARI averaged over 20 trials, whereas “NPB-DAA (MAP)” shows the trial with the maximum a posteriori (MAP) probability among the 20 trials. An advantage of the NPB-DAA is that it can calculate the posterior probability of a given dataset after the learning phase, because the NPB-DAA is derived from a generative model, i.e., the HDP-HLM, which integrates the language and acoustic models. In contrast with the conventional DAA and similar methods that do not have an appropriate generative model, the NPB-DAA can select an appropriate learning result by referring to this probability. The MAP row in Table III shows that this probability is an adequate criterion for selecting a learning result.

The results show that “NPB-DAA (MAP)” outperformed not only the conventional DAA but also the Julius-based word discovery systems, whose acoustic models were trained in a supervised manner. One reason is that the acoustic models of the DAAs were trained from a single participant's speech signals, whereas Julius's acoustic model was trained on the speech signals of many speakers. In other words, the NPB-DAA acquired a speaker-dependent acoustic model, whereas Julius used a speaker-independent one. This adaptation of the acoustic model to the speaker likely increased the NPB-DAA's performance.

The results also show that a naive application of the NPYLM to recognized phoneme sequences results in poor word acquisition performance, especially for the conventional DAA. Because the theory of the NPYLM does not presume that letter sequences contain recognition errors, phoneme recognition errors deteriorate word segmentation performance. The methods that simply apply an NPYLM to obtained phoneme sequences, i.e., the conventional DAA and Julius (phoneme dictionary + NPYLM), produced far worse word ARIs than letter ARIs. Moreover, latticelm, which assumes phoneme recognition errors to some extent, could not dramatically improve word acquisition performance in our experimental setting.

In contrast, “Julius (triphone + word dictionary)” improved its word ARI performance with respect to letter ARI performance. “Julius (monophone + word dictionary)” also kept its performance high with respect to the word recognition task compared with the phoneme recognition task. We note that the word error rate was 32.8% and the phoneme error rate was 28.1% in Julius (monophone + word dictionary).

In the research field of ASR, it is widely known that a good language model improves word and phoneme recognition performance. The NPB-DAA could not improve its word ARI relative to its letter ARI. However, it obtained an adequate language model and prevented the word ARI from becoming far worse than the letter ARI. To achieve such error-proof word acquisition, the direct inference of latent words is important in the NPB-DAA. In the inference procedure described above, latent words are sampled directly, without sampling latent letters, while marginalizing over all possible latent letter sequences. This achieves an effect similar to that of a given language model in the inference process.

Typical examples of the estimation results of the NPB-DAA and the conventional DAA are shown in Table IV. Each number in parentheses represents an estimated phoneme label, each space represents a phoneme boundary, each number in bold represents a sampled index of a word, and “/” represents a boundary between successive words. For example, “ao ie” was divided into two words, i.e., “5 0 1” and “6 3 4 6,” in the NPB-DAA results, and their word indices were 3 and 4. In Table IV, the sampled letters corresponding to the word “ie” are underlined. Although the conventional DAA could not estimate “ie” as a single word, the NPB-DAA could estimate “ie” to be the single word “4.” In the conventional DAA results, several phoneme recognition errors can be found. The errors completely deteriorated the subsequent chunking process, i.e., unsupervised morphological analysis using an NPYLM, as past research has frequently pointed out. As shown in Table IV, the NPB-DAA also made some phoneme recognition errors. However, in the NPB-DAA, latent words are sampled on the basis of the marginalized phoneme distribution before concrete phoneme sequences are sampled. This property of the sampling procedure seems to improve the performance of the NPB-DAA.

Vowel sequence Estimated NPB-DAA results Estimated conventional DAA results
ao ie 3 (5 0 1) / 4 (6 3 4 6) 226 (2 0 3 4 1 5 4 1)
ao ie ao 3 (5 0 1) / 4 (6 3 4 6) / 3 (5 0 1) / 0 (6 4 6) 494 (3) / 675 (2 3 0) / 374 ( 1 5 4 1 2 0 1)
aue ie 6 (6 5 1 2 6 4) / 4 (6 3 4 6) 329 (2 3 8 4 5 4 1)
ie ie 4 (6 3 4 6) / 4 (6 3 4 6) 389 ( 5 4 1 4 1 5 4 1)
ie uo 4 (6 3 4 6) / 5 (5 1 2) / 3 (5 0 1) 401 ( 5 4 1 8 0 1)
ie aioi 4 (6 3 4 6) / 1 (5 6 4 6 3 6 1) / 4 (6 3 4 6) 813 ( 5 4 1 2 4 5) / 832 (4 3 0 3 4 5 1)
TABLE IV: Example word discovery results

An example of the estimated latent variables is shown in Fig. 7, which shows the results for time series data corresponding to the vowel sequence “ao ie ao.” The input time series data, i.e., the 12-dimensional MFCC time series, are shown at the top of the figure. The middle and bottom panels show the inference process. The top of each panel shows the true latent letters or latent words, whereas the bottom shows the inferred result. The vertical axes represent the number of Gibbs sampling iterations. This shows that the inference procedure worked for human vowel sequence data and could estimate an adequate unit for each word.

Fig. 7: Example of inference results for “ao ie ao.” MFCC feature vectors are plotted in the top panel. The middle and bottom panels show the inference results of latent letters and latent words, respectively. Different colors denote different states.

Let us further examine the characteristics of the segmentation results of the NPB-DAA. Table IV shows that some of the estimated latent words have a latent letter “6” at their head or tail. The latent letter “6” represents silence observed during the transition from one vowel to another. Silence in speech signals and the transitional sounds observed between two phonemes were treated in the same manner as other uttered sounds in our model. The question of whether such signals should be treated in the same way as other sounds in a generative model calls for further investigation. In our model, a phoneme is simply represented by a single Gaussian distribution, although many past speech recognition systems assign a richer structure to a phoneme, e.g., a three-state left-to-right HMM with GMM emission distributions. There is room for investigating whether a phoneme model, i.e., a latent letter, should itself have a more complex structure, or if a double articulation hierarchy is sufficient from the viewpoint of unsupervised word discovery tasks.

An interesting result that represents a characteristic of the NPB-DAA is the latent word “4 (6 3 4 6)” estimated at the end of “ie aioi.” The speech signals corresponding to this “4” were a kind of transitional sound observed following “aioi.” The NPB-DAA directly inferred the latent word by marginalizing latent letters. In this case, it seems that “4” was more likely than other latent words, and the NPB-DAA hence generated this result. This can be regarded as a side effect of our approach, i.e., the marginalization of latent letter sequences in a latent word. We are confident that the marginalization of latent letters and the direct inference of word sequences are important to improving the performance of the unsupervised word segmentation of continuous speech signals, but there is room to consider this side effect.

Note that the NPB-DAA performed unsupervised word discovery under the condition that the training data consisted of speech signals uttered by one speaker, in contrast with Julius, whose acoustic model was trained using many speakers' speech signals. Speaker-independent, unsupervised word discovery from continuous speech signals remains a challenging problem, because the acoustic features of phonemes depend heavily on the speaker. When we gave four speakers' speech signals to the NPB-DAA at the same time, the Letter ARI and the Word ARI both decreased. By contrast, Julius with a triphone acoustic model and a true word dictionary was less affected. In this experiment, 120 audio data items, recorded by asking two male and two female Japanese speakers to read 30 artificial sentences, were used, i.e., half of the data items used in the main experiment, owing to the computational cost. We observed that speaker-dependent phoneme models were obtained by the NPB-DAA, i.e., speech signals representing the same phoneme uttered by different speakers tended to be clustered into different latent letters. Developing a machine learning method that enables a robot to obtain language and acoustic models independent of the speaker, or to adapt automatically to different speakers, is one of our future challenges.

VII Conclusion

In this paper, we proposed the NPB-DAA for the direct and simultaneous acquisition of language and acoustic models from continuous speech signals in an unsupervised manner. For this purpose, we proposed an integrative generative model called the HDP-HLM by extending the HDP-HSMM. Based on this generative model, we derived an inference procedure by extending the blocked Gibbs sampler originally proposed for the HDP-HSMM. The method is expected to enable a developmental robot to obtain language and acoustic models simultaneously and directly from continuous speech signals. To evaluate the performance of the proposed method, two experiments were performed. In the first experiment, the proposed method was applied to synthetic data, and it was shown that the method can successfully infer latent words embedded in time series data in an unsupervised manner. In the second experiment, we applied the proposed method to actual human Japanese vowel sequences. The results showed that the proposed method outperformed a conventional two-stage sequential method, i.e., the conventional DAA, and a baseline ASR method.

One of the most important challenges in our future work is to achieve complete human language acquisition from speech signals. In this study, we did not achieve complete language acquisition from speech signals that include consonants as well as vowels. Language acquisition from more natural speech signals, such as child-directed speech by human parents, is also part of our future work. To achieve these aims, we still face two main problems: feature extraction and computational cost.

To address these problems, more sophisticated feature extraction methods are needed. Deep learning has gained attention recently because of its impressive feature extraction performance. Integrating a deep learning method into the NPB-DAA should improve its performance.

Computational cost is another problem. Even though the size of the dataset used in Experiment 2 was very small, 100 iterations took approximately 240 minutes using an Intel Xeon E5-2650 v2 CPU (2.60 GHz, 8 cores, 16 threads). In particular, the computational cost of the blocked Gibbs sampler grows with the maximum number of latent letters per word, the maximum duration of a word, and the maximum number of words. To apply the proposed method to larger datasets, reducing its computational cost will be necessary.

Currently, the accuracy of the language acquisition is still limited, as shown in Table III. In this paper, we focused on a language acquisition method based on distributional cues and proposed a mathematical model for language acquisition. Obviously, distributional cues are not sufficient for more accurate language acquisition. As suggested by several computational and robotic studies, making use of co-occurrence cues improves the accuracy of language acquisition [21, 23, 22]. The proposed HDP-HLM is a fully probabilistic generative model. Therefore, taking other factors into consideration is relatively easy compared with heuristic models; this is another advantage of our approach. Combining prosodic and co-occurrence cues into the NPB-DAA, and thereby obtaining a more accurate and more plausible constructive developmental language acquisition model, is also a direction for future research.

References

  • [1] R. N. Aslin, J. Z. Woodward, N. P. LaMendola, and T. G. Bever, “Models of word segmentation in fluent maternal speech to infants,” in Signal to syntax: Bootstrapping from speech to grammar in early acquisition, J. L. Morgan and K. Demuth, Eds.   Psychology Press, 1995, pp. 117–134.
  • [2] J. R. Saffran, E. L. Newport, and R. N. Aslin, “Word Segmentation: The Role of Distributional Cues,” Journal of Memory and Language, vol. 35, no. 4, pp. 606–621, 1996.
  • [3] J. R. Saffran, R. N. Aslin, and E. L. Newport, “Statistical learning by 8-month-old infants.” Science, vol. 274, no. 5294, pp. 1926–1928, 1996.
  • [4] E. D. Thiessen and J. R. Saffran, “When cues collide: use of stress and statistical cues to word boundaries by 7- to 9-month-old infants.” Developmental psychology, vol. 39, no. 4, pp. 706–716, 2003.
  • [5] P. K. Kuhl, “Cracking the speech code: How infants learn language,” Acoustical Science and Technology, vol. 28, no. 2, pp. 71–83, 2007.
  • [6] M. J. Johnson and A. S. Willsky, “Bayesian nonparametric hidden semi-Markov models,” Journal of Machine Learning Research, vol. 14, pp. 673–701, February 2013.
  • [7] M. R. Brent, “An efficient, probabilistically sound algorithm for segmentation and word discovery,” Machine Learning, vol. 34, pp. 71–105, 1999.
  • [8] A. Venkataraman, “A statistical model for word discovery in transcribed speech,” Computational Linguistics, vol. 27, no. 3, pp. 351–372, 2001.
  • [9] S. Goldwater, T. L. Griffiths, M. Johnson, and T. Griffiths, “Contextual dependencies in unsupervised word segmentation,” in Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, 2006, pp. 673–680.
  • [10] S. Goldwater, T. L. Griffiths, and M. Johnson, “A Bayesian framework for word segmentation: exploring the effects of context,” Cognition, vol. 112, no. 1, pp. 21–54, Jul. 2009.
  • [11] D. Mochihashi, T. Yamada, and N. Ueda, “Bayesian unsupervised word segmentation with nested Pitman-Yor language modeling,” in Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP (ACL-IJCNLP), 2009, pp. 100–108.
  • [12] M. Johnson and S. Goldwater, “Improving nonparameteric Bayesian inference: experiments on unsupervised word segmentation with adaptor grammars,” in Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, 2009, pp. 317–325.
  • [13] M. Chen, B. Chang, and W. Pei, “A joint model for unsupervised Chinese word segmentation,” in Conference on Empirical Methods in Natural Language Processing (EMNLP), 2014, pp. 854–863.
  • [14] P. Magistry, “Unsupervized word segmentation : the case for Mandarin Chinese,” in Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Short Papers, vol. 2, 2012, pp. 383–387.
  • [15] S. Sakti, A. Finch, R. Isotani, H. Kawai, and S. Nakamura, “Unsupervised determination of efficient Korean LVCSR units using a Bayesian Dirichlet process model,” in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2011, pp. 4664–4667.
  • [16] D. K. Roy and A. P. Pentland, “Learning words from sights and sounds: a computational model,” Cognitive Science, vol. 26, no. 1, pp. 113–146, 2002.
  • [17] N. Iwahashi, “Interactive learning of spoken words and their meanings through an audio-visual interface,” IEICE Transactions on Information and Systems, no. 2, pp. 312–321, 2008.
  • [18] ——, “Language acquisition through a human-robot interface by combining speech, visual, and behavioral information,” Information Sciences, vol. 156, pp. 109–121, 2003.
  • [19] N. Iwahashi, K. Sugiura, R. Taguchi, T. Nagai, and T. Taniguchi, “Robots that learn to communicate: A developmental approach to personally and physically situated human-robot conversations,” in Dialog with Robots Papers from the AAAI Fall Symposium, 2010, pp. 38–43.
  • [20] T. Araki, T. Nakamura, T. Nagai, S. Nagasaka, T. Taniguchi, and N. Iwahashi, “Online learning of concepts and words using multimodal LDA and hierarchical Pitman-Yor Language Model,” IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1623–1630, Oct. 2012.
  • [21] T. Nakamura, T. Nagai, K. Funakoshi, S. Nagasaka, T. Taniguchi, and N. Iwahashi, “Mutual learning of an object concept and language model based on MLDA and NPYLM,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2014, pp. 600 – 607.
  • [22] R. Taguchi, Y. Yamada, K. Hattori, T. Umezaki, and M. Hoguro, “Learning place-names from spoken utterances and localization results by mobile robot,” in Interspeech, 2011, pp. 1325–1328.
  • [23] A. Taniguchi, T. Taniguchi, and T. Inamura, “Lexical acquisition related to places of mobile robot based on ambiguous syllable recognition,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2015, submitted.
  • [24] G. Neubig, M. Mimura, S. Mori, and T. Kawahara, “Bayesian learning of a language model from continuous speech,” IEICE Transactions on Information and Systems, vol. E95-D, no. 2, pp. 614–625, 2012.
  • [25] J. Heymann and O. Walter, “Unsupervised word segmentation from noisy input,” in IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU), 2013, pp. 458–463.
  • [26] J. Heymann, O. Walter, R. Haeb-Umbach, and B. Raj, “Iterative Bayesian word segmentation for unsupervised vocabulary discovery from phoneme lattices,” in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2014, pp. 4085–4089.
  • [27] M. Elsner, S. Goldwater, N. Feldman, and F. Wood, “A joint learning model of word segmentation, lexical acquisition, and phonetic variability,” in Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, Seattle, Washington, USA, 2013, pp. 42–54.
  • [28] B. M. Lake, G. K. Vallabha, and J. L. McClelland, “Modeling unsupervised perceptual category learning,” IEEE Transactions on Autonomous Mental Development, vol. 1, no. 1, pp. 35–43, 2009.
  • [29] N. H. Feldman, T. L. Griffiths, S. Goldwater, and J. L. Morgan, “A role for the developing lexicon in phonetic category acquisition.” Psychological review, vol. 120, no. 4, pp. 751–78, 2013.
  • [30] C. Lee and J. Glass, “A nonparametric Bayesian approach to acoustic model discovery,” in Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers, 2012, pp. 40–49.
  • [31] C.-y. Lee, Y. Zhang, and J. Glass, “Joint learning of phonetic units and word pronunciations for ASR,” in Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, 2013, pp. 182–192.
  • [32] H. Brandl, B. Wrede, F. Joublin, and C. Goerick, “A self-referential childlike model to acquire phones, syllables and words from acoustic speech,” IEEE International Conference on Development and Learning, pp. 31–36, Aug. 2008.
  • [33] O. Walter, T. Korthals, R. Haeb-Umbach, and B. Raj, “A hierarchical system for word discovery exploiting DTW-based initialization,” in IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU), 2013, pp. 386–391.
  • [34] T. Taniguchi and S. Nagasaka, “Double articulation analyzer for unsegmented human motion using Pitman-Yor language model and infinite hidden Markov model,” in IEEE/SICE International Symposium on System Integration (SII), 2011, pp. 250–255.
  • [35] E. B. Fox, E. B. Sudderth, M. I. Jordan, and A. S. Willsky, “A sticky HDP-HMM with application to speaker diarization,” The Annals of Applied Statistics, vol. 5, no. 2A, pp. 1020–1056, 2011.
  • [36] K. Takenaka, T. Bando, S. Nagasaka, T. Taniguchi, and K. Hitomi, “Contextual scene segmentation of driving behavior based on double articulation analyzer.” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2012, pp. 4847–4852.
  • [37] T. Taniguchi, S. Nagasaka, K. Hitomi, N. P. Chandrasiri, and T. Bando, “Semiotic prediction of driving behavior using unsupervised double articulation analyzer,” in IEEE Intelligent Vehicles Symposium (IV), 2012, pp. 849–854.
  • [38] T. Taniguchi, S. Nagasaka, K. Hitomi, K. Takenaka, and T. Bando, “Unsupervised hierarchical modeling of driving behavior and prediction of contextual changing points,” IEEE Transactions on Intelligent Transportation Systems, vol. PP, pp. 1–15, 2014, in press.
  • [39] S. Nagasaka, T. Taniguchi, G. Yamashita, K. Hitomi, and T. Bando, “Finding meaningful robust chunks from driving behavior based on double articulation analyzer,” in IEEE/SICE International Symposium on System Integration (SII), 2012, pp. 535–540.
  • [40] T. Bando, K. Takenaka, S. Nagasaka, and T. Taniguchi, “Unsupervised drive topic finding from driving behavioral data,” in IEEE Intelligent Vehicles Symposium (IV), 2013, pp. 177–182.
  • [41] ——, “Automatic drive annotation via multimodal latent topic model,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2013, pp. 2744–2749.
  • [42] K. Takenaka, T. Bando, S. Nagasaka, and T. Taniguchi, “Drive video summarization based on double articulation structure of driving behavior,” in ACM Multimedia, 2012, pp. 1169–1172.
  • [43] K. P. Murphy, “Hidden semi-Markov models (HSMMs),” Tech. Rep. November, 2002. [Online]. Available: http://www.cs.ubc.ca/~murphyk/Papers/segment.pdf
  • [44] Y. W. Teh, M. I. Jordan, M. J. Beal, and D. M. Blei, “Hierarchical Dirichlet processes,” Journal of the american statistical association, vol. 101, no. 476, 2006.
  • [45] J. Sethuraman, “A constructive definition of Dirichlet priors,” Statistica Sinica, vol. 4, no. 2, pp. 639–650, 1994.
  • [46] S. Thrun, W. Burgard, and D. Fox, Probabilistic Robotics (Intelligent Robotics and Autonomous Agents series).   The MIT Press, 2005.
  • [47] L. Hubert and P. Arabie, “Comparing partitions,” Journal of classification, vol. 2, no. 1, pp. 193–218, 1985.
