Generation and Pruning of Pronunciation Variants to Improve ASR Accuracy


Abstract

Speech recognition, especially name recognition, is widely used in phone services such as company directory dialers, stock quote providers, or location finders. It is usually challenging due to pronunciation variations. This paper proposes an efficient and robust data-driven technique which automatically learns acceptable word pronunciations and updates the pronunciation dictionary to build a better lexicon, without affecting recognition of other words similar to the target word. It generalizes well on datasets of various sizes, and reduces the error rate on a database of 13,000+ human names by 42%, compared to a baseline with regular dictionaries already covering canonical pronunciations of 97%+ of the words in the names, plus a well-trained spelling-to-pronunciation (STP) engine.


Zhenhao Ge, Aravind Ganapathiraju, Ananth N. Iyer, Scott A. Randal, Felix I. Wyss
Interactive Intelligence Inc., Indianapolis, Indiana, USA
{roger.ge, aravind.ganapathiraju, ananth.iyer, scott.randal, felix.wyss}@inin.com


Index Terms: pronunciation learning, ASR, name recognition, grammar, lexicon

1 Introduction

Grammar-based Automatic Speech Recognition (ASR) can be challenging due to variation in pronunciation. These variations can be pronunciations of native words by non-native speakers, pronunciations of imported non-native words by native speakers, or the result of uncommon spellings of special words. Many techniques have been tried to address this challenge, such as weighted speaker clustering, massive adaptation, and adaptive pronunciation modeling [1].

Words specified in the grammar take their baseline pronunciations either from regular dictionaries, such as 1) prototype dictionaries for the most common words and 2) linguist hand-crafted dictionaries for the less common words, or 3) from a spelling-to-pronunciation (STP)/grapheme-to-phoneme (G2P) engine with a set of rules for special words. These baseline pronunciations can be a poor match for "difficult" words and may cause recognition errors, so ASR accuracy can be significantly improved if more suitable pronunciations are learned. However, blindly generating variants of a word's pronunciation, though it can increase the recognition rate for that particular word, reduces the accuracy of recognizing "similar" words, which are close to the target word in the pronunciation space.

There are various ways to learn pronunciations [2, 3] and here we propose a novel efficient algorithm. The goal of this algorithm is two-fold. For each target word: a) select the best set of alternate pronunciations from a candidate set originating from the baseline pronunciation; b) avoid any “side-effect” on neighboring words in pronunciation space. This is achieved by maximizing the overall recognition accuracy of a word set containing the target word and its neighboring words. A pronunciation variant generation and searching process is developed, which further performs sorting and pruning to limit the number of total accepted pronunciations for each word.

Beaufays et al. used probability models to suggest alternative pronunciations by changing one phoneme at a time [4]. Réveil et al. add pronunciation variants to a baseline lexicon using multiple phoneme-to-phoneme (P2P) converters with different features and rules [5]. Compared to these methods, the proposed technique is more efficient and allows searching in a much wider space without affecting accuracy.

The work was initiated during the first author's internship at Interactive Intelligence (ININ) [6], and later improved in terms of accuracy, efficiency, and flexibility. This paper is organized as follows: Sec. 2 describes the database; Sec. 3 provides an overview of the grammar-based name recognition framework; Sec. 4 introduces some preliminary knowledge which facilitates the explanation of the pronunciation learning algorithm in Sec. 5; Sec. 6 provides some heuristics to improve efficiency in implementing pronunciation learning, followed by the results and conclusions in Sec. 7.

2 Data

This work used the ININ company directory database, which contains human names (concatenations of 2, 3, or 4 words), collected in two phases for pronunciation learning (training) and accuracy improvement evaluation (testing) respectively. The two phases share the same pool of 13875 names, and Tab. 1 lists the statistics and baseline accuracies. Names were pronounced in English by speakers from multiple regions and countries, who were asked to read a list of native and non-native names with random repetitions. The audio was then segmented into recordings of individual names. The reduction in Name Error Rate (NER) from phase 1 to phase 2 was mainly because the latter was recorded over cleaner channels with less packet loss, using better corpus creation methods.

Database   Grammar Size   No. of Unique Names (Incorrect)   No. of Name Instances (Incorrect)   NER (%)
phase 1    13875          12419 (5307)                      38806 (8083)                        20.83
phase 2    13875          12662 (3998)                      42055 (6043)                        14.37
Table 1: ININ name databases with baseline accuracies

Recognition is normally more challenging when the grammar size increases, since names are more densely packed in the pronunciation space and more easily confused with one another. Here the NER is evaluated with a growing grammar size N, and the data in both phases were randomly segmented into nested subsets (N = 1000, 3000, ..., 13000, as in Tab. 5), where the larger subset always includes the smaller one.

3 Overview of Grammar-based ASR

Figure 1: Structure of grammar-based name recognition

Grammar-based ASR is used to recognize input speech as one of the entries specified in the grammar file. For example, if the grammar contains various names, the input speech will be recognized as the most likely name, or the system will report "no match" if it is not close to any name. This work used Interaction Speech Recognition®, a grammar-based ASR engine developed at ININ. Fig. 1 illustrates the main components with both acoustic and language resources. The acoustic information is modeled with a Hidden Markov Model - Gaussian Mixture Model (HMM-GMM). The front end for this system uses Mel-Frequency Cepstral Coefficients (MFCCs) transformed using Linear Discriminant Analysis (LDA). The language resource is provided as a name grammar according to the Speech Recognition Grammar Specification (SRGS) [7]. The linguistic information is encoded using a lexicon containing text normalizers, pronunciation dictionaries, and a decision-tree-based spelling-to-pronunciation (STP) predictor. The work here updates the pronunciation dictionaries by adding learned pronunciations to build a better lexicon for recognition.

4 Preliminaries

To better describe the pronunciation learning algorithm in Sec. 5, this section introduces three interrelated preliminary concepts: a) the confusion matrix, b) pronunciation space and distance, and c) generation of candidate pronunciations.

4.1 Confusion Matrix

This work used the Arpabet phoneme set of 39 phonemes [8] to construct a confusion matrix C. The value C_ij serves as a similarity measurement between phonemes p_i and p_j: the smaller the value, the more similar they are. It considers both acoustic and linguistic similarities and is formulated as:

C_ij = A_ij (1 − M_ij),    (1)

where A is the acoustic confusion matrix and M the linguistic cluster matrix. To construct A, phoneme alignment was performed on the Wall Street Journal (WSJ) corpus to find the average log-likelihood of recognizing phoneme p_i as p_j. These values were then sign-flipped and normalized, so the diagonal values in A are all zeros. M is a symmetric binary matrix where M_ij = 1 if p_i and p_j are in the same linguistic cluster. The confusion between p_i and p_j is linguistically likely even though they may acoustically sound very different, and vice versa. Tab. 2 shows the 16 clusters defined by in-house linguists based on linguistic similarities. Using Eq. (1), the combined confusion matrix prioritizes the linguistic similarity; the acoustic similarity is considered only when the phonemes are not in the same linguistic cluster.

1 iy, ih, ay, y 5 ey, eh 9 ae, aa, ao, ah, aw 13 ow, oy
2 uw, uh, w 6 er, r, l 10 p, b 14 t, d
3 k, g 7 f, v 11 s, z, sh, zh 15 ch, jh
4 m 8 n, ng 12 th, dh 16 hh
Table 2: Linguistic phoneme clusters
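To make the combination concrete, the sketch below builds the binary cluster matrix M from a few of the clusters in Tab. 2 and combines it with an acoustic matrix, assuming the form C_ij = A_ij * (1 − M_ij), which matches the description (same-cluster pairs get confusion 0; acoustic similarity matters only across clusters). The acoustic values here are random stand-ins, not the WSJ-derived ones.

```python
import random

# A few linguistic phoneme clusters from Tab. 2 (abbreviated for the sketch).
clusters = [["iy", "ih", "ay", "y"], ["uw", "uh", "w"], ["k", "g"],
            ["m"], ["ey", "eh"], ["er", "r", "l"], ["f", "v"], ["n", "ng"]]
phonemes = [p for c in clusters for p in c]
pos = {p: i for i, p in enumerate(phonemes)}
n = len(phonemes)

# M[i][j] = 1 iff phonemes i and j share a linguistic cluster.
M = [[0.0] * n for _ in range(n)]
for c in clusters:
    for a in c:
        for b in c:
            M[pos[a]][pos[b]] = 1.0

# Stand-in acoustic matrix A: in the paper it comes from sign-flipped,
# normalized phoneme log-likelihoods on WSJ; here, random symmetric
# values with a zero diagonal.
random.seed(0)
A = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        A[i][j] = A[j][i] = random.uniform(0.1, 1.0)

# Combined confusion: same-cluster pairs get 0 (most similar);
# acoustic similarity is used only across clusters.
C = [[A[i][j] * (1.0 - M[i][j]) for j in range(n)] for i in range(n)]
```
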

4.2 Pronunciation Space and Distance Measurement

Pronunciation space is spanned by all possible pronunciations (phoneme sequences). Sequences are considered as points in this space, and the "distances" between them are computed using the confusion matrix C. The distance between two pronunciations p = (p_1, ..., p_m) and q = (q_1, ..., q_n), where m and n are the lengths of the phoneme sequences, is measured using Levenshtein distance with Dynamic Programming [9], with substitution costs given by C. It is then normalized by the maximum length of the two, i.e., max(m, n). For a database with grammar size N, an N × N name distance matrix D is pre-computed before pronunciation learning, where D_ij indicates the distance between names i and j.
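The normalized pronunciation distance can be sketched as a standard dynamic-programming Levenshtein distance whose substitution cost is a confusion-matrix lookup. The flat insertion/deletion cost below is an assumption, since the text does not specify how indels are weighted.

```python
def pron_distance(p, q, cost, indel=1.0):
    """Normalized Levenshtein distance between phoneme sequences p and q.

    cost(a, b) is the substitution cost (e.g. a lookup into the confusion
    matrix C); indel is an assumed flat insertion/deletion cost. The raw
    edit distance is divided by max(len(p), len(q)).
    """
    m, n = len(p), len(q)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = i * indel
    for j in range(1, n + 1):
        d[0][j] = j * indel
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(
                d[i - 1][j] + indel,                          # deletion
                d[i][j - 1] + indel,                          # insertion
                d[i - 1][j - 1] + cost(p[i - 1], q[j - 1]),   # substitution
            )
    return d[m][n] / max(m, n)
```

With a 0/1 substitution cost this reduces to the classic normalized Levenshtein distance; passing a confusion-matrix lookup makes acoustically or linguistically close substitutions cheaper.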

4.3 Generation of Candidate Pronunciations

Pronunciation learning of a target word w requires generating a pool of candidate pronunciations "around" the baseline pronunciation P(w) in the pronunciation space to search from. Given P(w) = (p_1, ..., p_L), where L is the length of P(w), by thresholding the i-th phoneme p_i in the confusion matrix C with search radius r_i, one can find n_i candidate phonemes (including p_i itself, since C_ii = 0), which can be indexed in the range [0, n_i − 1]. Note that the candidate phonemes in Tab. 3 are indexed starting from 0, rather than 1; this is intentional, to make it easier to describe the candidate pronunciation indexing later in this section. Here we use the same search radius r to search potential replacements for each phoneme, i.e., r_i = r, where r is experimentally determined by the workload (i.e., the number of misrecognized name instances) required for pronunciation learning.

After finding the candidate phonemes for each p_i using search radius r, the total number of candidate pronunciations V can be calculated as V = n_1 × n_2 × ... × n_L. For example, given the word paine with baseline pronunciation [p ey n], here L = 3 and there are n_1 = 2, n_2 = 4, n_3 = 2 candidate phonemes for p_1, p_2, p_3 respectively. The phoneme candidates for substitution are listed in Tab. 3, and Tab. 4 shows all 16 (= 2 × 4 × 2) candidate pronunciations with repetition patterns underlined.

Phoneme p_i in P(w)           p              ey                               n
Number of candidates n_i      2              4                                2
Candidate phonemes (index)    b (0), p (1)   eh (0), ey (1), iy (2), ih (3)   n (0), ng (1)
Table 3: Phoneme substitution candidates for the word paine
 
0 b eh n  8 p eh n
1 b eh ng  9 p eh ng
2 b ey n  10 p ey n
3 b ey ng  11 p ey ng
4 b iy n  12 p iy n
5 b iy ng  13 p iy ng
6 b ih n  14 p ih n
7 b ih ng  15 p ih ng
Table 4: Candidate pronunciations of the word paine with their pronunciation and phoneme indices
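The candidate set in Tab. 4 can be reproduced as a Cartesian product of the per-phoneme candidate lists from Tab. 3; with the last position varying fastest, the enumeration order matches the pronunciation indices in the table.

```python
from itertools import product

# Per-phoneme candidate lists for paine, in the index order of Tab. 3.
candidates = [["b", "p"],                 # for p  (n1 = 2)
              ["eh", "ey", "iy", "ih"],   # for ey (n2 = 4)
              ["n", "ng"]]                # for n  (n3 = 2)

# Cartesian product; the last position varies fastest, as in Tab. 4.
prons = list(product(*candidates))
```
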

Meanwhile, the distance from the baseline pronunciation P(w) of word w to its farthest candidate pronunciation is defined as its outreach distance d_out(w), which is later used to define the scope for finding w's neighboring words. It is formulated below:

d_out(w) = d(P(w), P*(w)), where P*(w) = (p*_1, ..., p*_L),    (2)

and p*_i = argmax_{p : C(p_i, p) ≤ r} C(p_i, p) is the candidate phoneme alternative farthest from p_i.

After generating the candidate pronunciation list from P(w) using this method, fast one-to-one mapping between the per-phoneme candidate indices (k_1, ..., k_L) and the pronunciation index I is essential for efficient candidate pronunciation lookup, and for segmenting the pronunciation list by phoneme position during pronunciation learning. Therefore, a pair of bi-directional mapping functions is provided in Eq. (3) and Eq. (4). For example, [p iy ng] can be indexed both by (k_1, k_2, k_3) = (1, 2, 1) and by I = 13.

I = Σ_{i=1}^{L} k_i ∏_{j=i+1}^{L} n_j    (3)
k_i = ⌊I / ∏_{j=i+1}^{L} n_j⌋ mod n_i    (4)
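The bi-directional mapping of Eq. (3) and Eq. (4) treats the pronunciation index as a mixed-radix number with radices n_1, ..., n_L, the last phoneme varying fastest (consistent with Tab. 4). A minimal sketch, with illustrative function names:

```python
def phonemes_to_index(ks, ns):
    """Eq. (3): per-phoneme candidate indices -> pronunciation index,
    read as a mixed-radix number (last phoneme varies fastest)."""
    idx = 0
    for k, n in zip(ks, ns):
        idx = idx * n + k
    return idx

def index_to_phonemes(idx, ns):
    """Eq. (4): pronunciation index -> per-phoneme candidate indices,
    by repeated division from the least significant (last) position."""
    ks = []
    for n in reversed(ns):
        ks.append(idx % n)
        idx //= n
    return list(reversed(ks))
```

For paine (radices 2, 4, 2), the candidate indices (1, 2, 1), i.e. [p iy ng], map to pronunciation index 13 and back.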

The example above illustrates candidate pronunciation generation with phoneme replacement. This method can easily be extended to include more candidates with phoneme deletion, by introducing a special "void" phoneme. However, it does not handle phoneme insertion, since that would include too many possible candidates.

5 Pronunciation Learning Algorithm

Pronunciation learning aims to find better alternative pronunciations for misrecognized names through a pronunciation generation and pruning process, which maximizes the accuracy improvement on a regional nameset consisting of the target name and its nearby similar names. The learning is performed for all misrecognized names; however, it is only applied on a word basis, to the misrecognized words within the misrecognized names. The following subsections first introduce the main word pronunciation learning algorithm and then elaborate on the key components.

5.1 Algorithm Outline

  1. Set the phoneme search radius r, and the upper bounds on the number of total pronunciations per name and per word.

  2. Perform baseline name recognition and collect all misrecognized name instances.

  3. For each target name with error instances:

    1. Compute its candidate pronunciations with radius r and its outreach distance in Eq. (2) to find its corresponding regional nameset.

    2. For each misrecognized word instance, find the best pronunciation using hierarchical pronunciation determination, and get the accuracy increment on the regional nameset from adding that pronunciation to the dictionary.

    3. Sort the learned pronunciations by accuracy increment and keep up to the per-name upper bound in the dictionary.

  4. For each target word with learned pronunciations:

    1. Find all names containing the word to form a word-level nameset.

    2. Evaluate the significance of the learned pronunciations by their accuracy boost on this nameset and keep up to the per-word upper bound of top pronunciations.

  5. Combine the pruned pronunciations of all error words and update the dictionary.
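The outline above can be sketched as follows, with the recognizer, candidate generator, and neighborhood lookup as assumed interfaces; for brevity, the per-word search is a brute-force stand-in for the hierarchical determination of Sec. 5.2.

```python
def learn_pronunciations(misrecognized, accuracy, candidates, neighbors,
                         max_per_name, max_per_word):
    """Sketch of the algorithm outline. Assumed interfaces:
    accuracy(nameset, extra_prons) -> recognition accuracy on the nameset,
    candidates(word)  -> candidate pronunciations around the baseline
                         (stand-in for hierarchical determination),
    neighbors(name)   -> regional nameset (target name + nearby names)."""
    learned = {}  # word -> [(pronunciation, accuracy gain), ...]
    for name, bad_words in misrecognized.items():        # step 3
        region = neighbors(name)                         # step 3a
        base = accuracy(region, {})
        gains = []
        for word in bad_words:                           # step 3b
            best, best_gain = None, 0.0
            for pron in candidates(word):
                gain = accuracy(region, {word: [pron]}) - base
                if gain > best_gain:
                    best, best_gain = pron, gain
            if best is not None:
                gains.append((word, best, best_gain))
        gains.sort(key=lambda t: t[2], reverse=True)     # step 3c
        for word, pron, gain in gains[:max_per_name]:
            learned.setdefault(word, []).append((pron, gain))
    for word in learned:                                 # step 4: prune
        learned[word].sort(key=lambda t: t[1], reverse=True)
        learned[word] = learned[word][:max_per_word]
    return learned                                       # step 5
```
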

5.2 Hierarchical Pronunciation Determination

Generally, given an input test name, the grammar-based ASR outputs a hypothesized name associated with the highest hypothesis score. However, if the hypothesized name has multiple pronunciations, which one actually yielded the hypothesis is not reported, for decoding efficiency (Fig. 2a). The case of pronunciation learning is similar (Fig. 2b): when a massive number of candidate pronunciations is provided to an ASR with a single-entry grammar (the grammar contains only one name), only the highest hypothesis score is yielded, and the associated best pronunciation is not provided. In order to find the best pronunciation among the candidates, hierarchical pronunciation determination with segmentation is used, determining one phoneme at a time.

Figure 2: Simplified demos of multi-gram and mono-gram ASR

For simplicity, an example determining the best pronunciation for the word paine is demonstrated. The same method applies to a name (a concatenation of words) as well.

Figure 3: Hierarchical pronunciation determination on the word paine

In Fig. 3, the phonemes of the best pronunciation are determined in the order p_1, p_2, p_3, by tracking, at each step, the segment of candidates with the highest confidence score.

In general, given n_i phoneme candidates for the i-th phoneme of P(w), let v_i be the number of pronunciations processed to determine the i-th phoneme, and Σ_{i=1}^{L} v_i the total number of pronunciations processed while learning the pronunciation for one word. In addition, the number of times the recognizer is run is Σ_{i=1}^{L} n_i. Given that the computational cost of running the recognizer once is c_r and that of processing each candidate pronunciation is c_p, where c_r ≫ c_p, the total computational cost T of running hierarchical pronunciation determination is approximately

T ≈ (Σ_{i=1}^{L} n_i) c_r + (Σ_{i=1}^{L} v_i) c_p.    (5)

For example, when determining the phonemes in the order p_1, p_2, p_3 (natural order) in Fig. 3,

T ≈ (2 + 4 + 2) c_r + (16 + 8 + 2) c_p = 8 c_r + 26 c_p.    (6)

Since c_r ≫ c_p, T is mainly determined by c_r, i.e., the factor Σ_{i=1}^{L} n_i. Compared with the brute-force method of evaluating candidate pronunciations one-by-one, associated with the factor ∏_{i=1}^{L} n_i, this algorithm is significantly faster.
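Under this cost model, the recognizer-run and pronunciation-processing counts for the paine example can be checked directly; the helper below is illustrative, assuming each step processes all candidates surviving to that step and runs the recognizer once per segment.

```python
from math import prod

def hierarchical_cost(counts):
    """Recognizer runs and pronunciations processed when phonemes are
    determined in the given order of per-position candidate counts.
    Step i processes prod(counts[i:]) surviving candidates and runs the
    recognizer once per segment, i.e. counts[i] times."""
    runs = sum(counts)
    processed = sum(prod(counts[i:]) for i in range(len(counts)))
    return runs, processed

# paine in natural order (n1, n2, n3) = (2, 4, 2):
runs, processed = hierarchical_cost([2, 4, 2])
brute_force_runs = prod([2, 4, 2])  # one recognizer run per candidate
```
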

6 Optimization in Implementation

6.1 Search Radius Reduction

In step 3b of Sec. 5.1, if too many alternatives are generated for a particular word, due to a long phoneme sequence, search radius reduction is triggered to reduce the computational cost by decreasing the phoneme search radius from r to a smaller radius. For example, the word desjardins is a long word, and its phonemes {eh, s, zh, aa, iy, z} have more than 5 phoneme candidates each, so its total number of candidate pronunciations requires much longer learning time than regular words. Fewer than 20% of words triggered this reduction; however, the average number of candidate pronunciations per word was reduced from 20,204 to 11,941. Both the triggering threshold and the reduced radius are determined experimentally. This method narrows the pronunciation variant search to the more similar ones.

6.2 Phoneme Determination Order Optimization

Let n_1, ..., n_L be the numbers of phoneme candidates for the phonemes in P(w). Fig. 3 shows the phonemes determined in the natural order p_1, p_2, p_3, with a total of 16 + 8 + 2 = 26 processed pronunciations. However, if they are determined in descending order of n_i, i.e., p_2, p_1, p_3 (n_2 ≥ n_1 ≥ n_3), then the number of processed pronunciations is minimized as 16 + 4 + 2 = 22 (Fig. 4). Generally, it can be mathematically proven that determining phonemes in descending order of their candidate counts minimizes the total number of processed pronunciations.

Figure 4: Hierarchical pronunciation determination on the word paine, with phonemes determined in descending order by number of candidates (p_2, p_1, p_3)
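The effect of determination order can be verified numerically. The sketch below assumes each step processes all candidates surviving to that step, as in the figures, and checks that the descending order is optimal for the paine counts.

```python
from itertools import permutations
from math import prod

def total_processed(counts):
    """Pronunciations processed when positions are fixed in this order:
    each step processes the product of the still-undetermined counts
    (already-fixed positions contribute a factor of one)."""
    return sum(prod(counts[i:]) for i in range(len(counts)))

n = [2, 4, 2]                                          # paine, natural order
natural = total_processed(n)                           # 16 + 8 + 2
descending = total_processed(sorted(n, reverse=True))  # 16 + 4 + 2
best = min(total_processed(list(p)) for p in permutations(n))
```
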

7 Results and Conclusions

The improvement from the baseline varies with the experimental settings, e.g., 1) how challenging the database is (the percentage of uncommon words); 2) the dataset and grammar sizes; 3) the quality of the audio recordings; and 4) the ASR acoustic modeling. The namesets in Tab. 1 contain 13.4% uncommon words (4.6% native, 8.8% non-native). Tab. 5 and Fig. 5 show that the baselines already have competitive accuracy, since the dictionaries provide canonical pronunciations for 97%+ of the words, and the rest are generated from a well-trained STP engine with a perfect-match accuracy of 55% on 5800 words reserved for testing. The pronunciations learned from phase 1 update the lexicon and are tested in phase 2. All NERs grow as the grammar size N increases, and the NER after learning is much lower but grows slightly faster than the other two.

Beaufays et al. [4] achieved an ERR of 40% with 1600 names, compared to a baseline letter-to-phone pronunciation engine. We obtained a similar ERR at a much larger grammar size (42.13% with 13,000 names) against a much stronger baseline. Compared with Réveil et al. [5], whose ERR was close to 40% with 3540 names spoken by speakers from 5 different language origins, our dataset may not have such diversity, but we achieved a much higher ERR of around 58% for a similar grammar size.

Grammar Size (N)        1000    3000    5000    7000    9000    11000   13000
phase 1 base NER (%)    8.54    12.39   14.73   16.49   18.19   19.29   20.44
phase 2 base NER (%)    5.86    8.52    10.39   11.63   12.57   13.35   14.10
phase 2 learn NER (%)   2.10    3.47    4.56    5.78    6.39    7.38    8.16
phase 2 ERR (%)         64.16   59.27   56.11   50.30   49.16   44.72   42.13
Table 5: Name Error Rates (NER) and Error Reduction Rates (ERR)
Figure 5: NERs before and after pronunciation learning

This pronunciation learning algorithm is an essential complement to a) STP/G2P interpreters and b) content-adapted pronunciation dictionaries made by linguists. When these two are not sophisticated enough to cover pronunciation variations, it can help to build a better lexicon, and ASR accuracy can be significantly improved, especially when the grammar size is not too large. As indicated in Tab. 5, the ERR of phase 2 tends to decrease as the grammar size increases, since there is not much room for learning when one name is surrounded by many other names with similar pronunciations. Similarly, the learned dictionary is also dependent on grammar size, i.e., a dictionary learned from a small database might not be a good fit for a much larger database, since its learning may be too aggressive, while a larger database requires a more conservative learning approach. In the future, learned pronunciations can be used to improve the STP interpreter by generating alternative spelling-to-pronunciation interpretation rules, so it can automatically output alternative pronunciations covering new names, and provide a baseline that is good enough even without learning.

References

  • [1] Y. Gao, B. Ramabhadran, J. Chen, M. Picheny et al., “Innovative approaches for large vocabulary name recognition,” in ICASSP 2001, vol. 1.    IEEE, 2001, pp. 53–56.
  • [2] I. Badr, “Pronunciation learning for automatic speech recognition,” Ph.D. dissertation, Massachusetts Institute of Technology, 2011.
  • [3] H. Y. Chan and R. Rosenfeld, “Discriminative pronunciation learning for speech recognition for resource scarce languages,” in Proceedings of the 2nd ACM Symposium on Computing for Development.    ACM, 2012, p. 12.
  • [4] F. Beaufays, A. Sankar, S. Williams, and M. Weintraub, “Learning name pronunciations in automatic speech recognition systems,” in Tools with Artificial Intelligence 2003. Proceedings. 15th IEEE International Conference, 2003, pp. 233–240.
  • [5] B. Réveil, J. Martens, and H. Van Den Heuvel, “Improving proper name recognition by means of automatically learned pronunciation variants,” Speech Communication, vol. 54, no. 3, pp. 321–340, 2012.
  • [6] Z. Ge, “Mispronunciation detection for language learning and speech recognition adaptation,” Ph.D. Dissertation, Purdue University West Lafayette, 2013.
  • [7] “Speech recognition grammar specification version 1.0,” http://www.w3.org/TR/speech-grammar/, accessed 2015-07-01.
  • [8] “The CMU pronouncing dictionary,” 2013. [Online]. Available: http://www.speech.cs.cmu.edu/cgi-bin/cmudict
  • [9] W. J. Heeringa, “Measuring dialect pronunciation differences using levenshtein distance,” Ph.D. dissertation, Citeseer, 2004.