Unsupervised Low Latency Speech Enhancement with RT-GCC-NMF
In this paper, we present RT-GCC-NMF: a real-time (RT), two-channel blind speech enhancement algorithm that combines the non-negative matrix factorization (NMF) dictionary learning algorithm with the generalized cross-correlation (GCC) spatial localization method. Using a pre-learned universal NMF dictionary, RT-GCC-NMF operates in a frame-by-frame fashion by associating individual dictionary atoms with target speech or background interference based on their estimated time delays of arrival (TDOAs). We evaluate RT-GCC-NMF on two-channel mixtures of speech and real-world noise from the Signal Separation and Evaluation Campaign (SiSEC). We demonstrate that this approach generalizes to new speakers, acoustic environments, and recording setups from very little training data, and outperforms all but one of the algorithms from the SiSEC challenge in terms of overall Perceptual Evaluation methods for Audio Source Separation (PEASS) scores and compares favourably to the ideal binary mask baseline. Over a wide range of input SNRs, we show that this approach simultaneously improves the PEASS and signal to noise ratio (SNR)-based Blind Source Separation (BSS) Eval objective quality metrics as well as the short-time objective intelligibility (STOI) and extended STOI (ESTOI) objective speech intelligibility metrics. A flexible, soft masking function in the space of NMF activation coefficients offers real-time control of the trade-off between interference suppression and target speaker fidelity. Finally, we use an asymmetric short-time Fourier transform (STFT) to reduce the inherent algorithmic latency of RT-GCC-NMF from 64 ms to 2 ms with no loss in performance. We demonstrate that latencies within the tolerable range for hearing aids are possible on current hardware platforms.
Real-world speech processing applications including assistive listening devices and digital personal assistants rely on online speech enhancement algorithms to suppress noise and interfering sound sources.
However, a significant amount of research has focused on the offline setting, and many algorithms remain unsuitable for real-time use due to batch processing or computational requirements.
Recent speech enhancement and source separation approaches based on deep neural networks offer impressive performance gains compared with traditional real-time signal processing methods [1, 2, 3, 4]; however, these methods tend to be computationally demanding, preventing their use in low-power devices, and often rely on future information, preventing their use in real-time systems.
Deep learning methods also require a significant amount of supervised data for training, thus preventing their use in data-poor domains.
The offline GCC-NMF source separation algorithm, on the other hand, is an unsupervised approach that performs feature learning on the mixture signal itself, thus forgoing the need for large amounts of supervised training data.
GCC-NMF combines unsupervised machine learning via non-negative matrix factorization (NMF) with the generalized cross-correlation (GCC) spatial localization method rooted in signal processing.
However, the NMF dictionary, its activation coefficients, and the target speaker's time difference of arrival (TDOA) are all estimated using entire noisy utterances (10 seconds in duration), thus precluding its use in real time.
NMF learns parts-based representations from non-negative data. For audio signals, NMF is typically applied to magnitude spectrogram representations, learning spectral or spectro-temporal atoms that capture patterns typical of sound sources. In the context of speech enhancement, we must then determine which atoms belong to the target speaker and which belong to interference. Supervised model-based approaches solve this problem by pre-learning dictionaries for each source in isolation [9, 10], allowing for real-time operation as only the current (and possibly previous) spectrogram frames are required at runtime. Unsupervised model-based approaches leverage the spatial distribution of the underlying sources to learn individual source dictionaries with no prior information in the form of separate datasets for speech and noise. These unsupervised approaches are unable to operate in real time, as the spatial information does not generalize to unseen settings.
In this work, we present RT-GCC-NMF, a real-time variant of GCC-NMF that addresses these limitations.
The STFT underlying most speech enhancement techniques based on deep neural networks or NMF, including the approach we present here, brings a trade-off between spectral resolution and the inherent delay between the system's input and output. This algorithmic latency is independent of processing speed, and is a consequence of the temporal windowing underlying the STFT. Since many algorithms rely on high spectral resolution, algorithmic latencies greater than 64 ms are common. For assistive listening devices including hearing aids, however, such high latencies lead to the perception of objectionable echoes, since a superposition of the aided and unaided sounds is heard by the listener. Depending on the type and severity of hearing loss, delays below 15 to 32 ms are likely required to be tolerable [19, 20], with delays less than 10 ms being a reasonable objective in the general case [21, 22]. To address this problem, an asymmetric STFT windowing approach proposed by Mauler and Martin is combined with RT-GCC-NMF, simultaneously providing high spectral resolution and latencies well below the 10 ms target.
The contributions of this paper are organized as follows. In Section II, we review GCC-NMF and propose a flexible soft masking function in the space of NMF activation coefficients. In Section V-B3, we show that this mask provides frame-by-frame control of the trade-off between target fidelity and interference suppression. In Section III, we develop RT-GCC-NMF, pre-learning an NMF dictionary in an unsupervised fashion on a different dataset than used at test time. In Section V-B1, we show that this approach generalizes to unseen speakers, acoustic conditions, and recording setups, from a very small amount of unlabelled training data. In Section IV, we combine RT-GCC-NMF with an asymmetric STFT windowing approach to drastically reduce its inherent algorithmic latency. In Section V-C2, we show that we may reduce algorithmic latency from 80 ms to as low as 2 ms with no loss in speech enhancement performance, therefore achieving algorithmic latencies well within the tolerable range for hearing assistive devices. In Section V, we study the effects of the RT-GCC-NMF system parameters and compare it with other approaches from the SiSEC challenge. In Section VI, we present an open source implementation of RT-GCC-NMF. We evaluate the computational performance on a wide variety of hardware platforms and show that latencies as low as 6 ms are possible in practice.
II Offline GCC-NMF
II-A GCC: Generalized cross-correlation
GCC is a robust approach to sound source localization in the presence of noise, interference, and reverberation [7, 25]. The GCC function extends the frequency-domain cross-correlation definition with an arbitrary frequency-weighting function $\psi(f,t)$, providing control over the relative importance of the signal's constituent frequencies when computing the cross-correlation:

$$G(t,\tau) = \sum_f \psi(f,t)\, X_L(f,t)\, X_R^*(f,t)\, e^{j 2 \pi f \tau} \qquad (1)$$

where $X_L(f,t)$ and $X_R(f,t)$ are the left and right complex-valued time-frequency transforms computed with the STFT, $^*$ denotes complex conjugation, and $f$, $t$, and $\tau$ index frequency, time, and TDOA respectively.
Many of the most robust localization methods are based on the GCC phase transform (GCC-PHAT) [26, 27, 28], in which frequencies are weighted equally by defining $\psi(f,t)$ as the reciprocal of the product of the magnitude spectrograms, i.e.

$$\psi(f,t) = \frac{1}{|X_L(f,t)|\,|X_R(f,t)|} \qquad (2)$$

such that:

$$G_{\mathrm{PHAT}}(t,\tau) = \sum_f \frac{X_L(f,t)\, X_R^*(f,t)}{|X_L(f,t)|\,|X_R(f,t)|}\, e^{j 2 \pi f \tau} \qquad (3)$$
The resulting GCC-PHAT angular spectrogram can then be pooled over time, with the TDOA of the highest peaks corresponding to the source location estimates; see Figure 1b) for an example.
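As an illustration, a minimal NumPy sketch of the GCC-PHAT angular spectrogram and max-pooled TDOA estimation might look as follows. The function names, shapes, and sign convention here are our own assumptions, not the paper's implementation:

```python
import numpy as np

def gcc_phat(XL, XR, tdoas, freqs):
    """GCC-PHAT angular spectrogram for a pair of STFT channels.

    XL, XR: complex STFTs of shape (F, T); tdoas: candidate TDOAs in
    seconds, shape (K,); freqs: STFT bin frequencies in Hz, shape (F,).
    Returns an angular spectrogram of shape (T, K). Sign convention:
    a right channel delayed by tau yields a peak at +tau.
    """
    cross = np.conj(XL) * XR
    cross /= np.abs(cross) + 1e-12          # phase transform weighting
    # steering matrix e^{j 2 pi f tau} for every (bin, TDOA) pair
    steering = np.exp(2j * np.pi * freqs[:, None] * tdoas[None, :])
    return np.real(np.einsum('ft,fk->tk', cross, steering))

def pooled_tdoa(XL, XR, tdoas, freqs):
    """Max-pooled TDOA estimate: pool the angular spectrogram over time
    and take the TDOA of the highest peak."""
    spec = gcc_phat(XL, XR, tdoas, freqs)
    return tdoas[int(spec.sum(axis=0).argmax())]
```

Pooling over a candidate TDOA grid avoids any explicit peak interpolation; the grid resolution bounds the localization accuracy.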
II-B NMF: Non-negative matrix factorization
When applying NMF to audio signals, the input typically consists of a magnitude spectrogram $V$, with $f$ and $t$ indexing frequency and time as above.
NMF decomposes the spectrogram into two non-negative matrices: a dictionary $W$ whose columns comprise atomic spectra indexed by $d$, and a set of corresponding activation coefficients $H$, such that $V \approx WH$; see Figure 1c) for example NMF dictionary atoms.
Each column of the input spectrogram $V$, i.e. each frame $t$, is thus approximated as a linear combination of the NMF dictionary atoms, with the activation coefficients taken from the corresponding column of $H$.
For the stereo spectrograms we study here, we concatenate the left and right input spectrograms along the time axis, $V = [V_L\ V_R]$, i.e. for left and right spectrograms each of size $F \times T$, the concatenated matrix has size $F \times 2T$.
In this way, the resulting NMF dictionary atoms capture only spectral information as before, with differences between the left and right channels captured in the corresponding activation coefficient matrices, $H = [H_L\ H_R]$.
In traditional NMF, dictionary learning and activation coefficient inference are performed concurrently by initializing the dictionary and activation coefficient matrices randomly, and updating them iteratively according to multiplicative update rules. The update rules converge to a local minimum of the beta divergence reconstruction cost function, a special case of which is the generalized Kullback-Leibler (KL) divergence, defined as,

$$D_{\mathrm{KL}}(V \,\|\, \Lambda) = \sum_{f,t} \left( V_{ft} \log \frac{V_{ft}}{\Lambda_{ft}} - V_{ft} + \Lambda_{ft} \right) \qquad (4)$$

where $\Lambda = WH$ is the reconstructed input matrix. The update rules for the KL divergence cost function are then,

$$H \leftarrow H \odot \frac{W^{\mathsf{T}} \left( V / \Lambda \right)}{W^{\mathsf{T}} \mathbf{1}} \qquad (5)$$

$$W \leftarrow W \odot \frac{\left( V / \Lambda \right) H^{\mathsf{T}}}{\mathbf{1} H^{\mathsf{T}}} \qquad (6)$$
where the divisions and Hadamard products $\odot$ are computed element-wise, and $\mathbf{1}$ is the all-ones matrix.
The NMF dictionary atoms are typically normalized after each update, and their activation coefficients scaled accordingly.
Since all time frames (all columns of $V$) are required prior to optimization, standard NMF is an offline approach. For RT-GCC-NMF, as described in Section III-B, we will instead pre-learn the NMF dictionary and infer its activation coefficients online on a frame-by-frame basis, by initializing the activation coefficient vector randomly and iteratively performing (5) while keeping the dictionary fixed.
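The multiplicative updates (5) and (6), with the usual atom normalization, can be sketched as follows. The dimensions, iteration count, and epsilon guards against division by zero are our own choices:

```python
import numpy as np

def nmf_kl(V, num_atoms, num_iter=100, seed=0, eps=1e-12):
    """KL-divergence NMF via multiplicative updates: V (F x T) ~ W @ H.

    Returns W (F x num_atoms) with columns normalized to unit sum,
    and H (num_atoms x T) rescaled accordingly.
    """
    rng = np.random.default_rng(seed)
    F, T = V.shape
    W = rng.random((F, num_atoms)) + eps
    H = rng.random((num_atoms, T)) + eps
    ones = np.ones_like(V)
    for _ in range(num_iter):
        # multiplicative update of the activation coefficients
        H *= (W.T @ (V / (W @ H + eps))) / (W.T @ ones + eps)
        # multiplicative update of the dictionary
        W *= ((V / (W @ H + eps)) @ H.T) / (ones @ H.T + eps)
        # normalize atoms and rescale their activation coefficients
        norms = W.sum(axis=0, keepdims=True)
        W /= norms
        H *= norms.T
    return W, H
```

For frame-by-frame inference against a fixed dictionary, only the `H` update line is iterated for a single column of `V`.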
II-C GCC-NMF
Given the arbitrary frequency-weighting function $\psi(f,t)$ in the definition of GCC, and the fact that individual NMF dictionary atoms are themselves non-negative functions of frequency, we may construct a set of atom-specific GCC frequency-weighting functions,

$$\psi_d(f,t) = \frac{W_{fd}}{|X_L(f,t)|\,|X_R(f,t)|} \qquad (7)$$

such that for a given atom $d$, frequencies are weighted according to their relative magnitude in the atom. The resulting atom-specific GCC-NMF angular spectrograms are then defined as follows, with examples shown in Figure 1d),

$$G_d(t,\tau) = \sum_f W_{fd}\, \frac{X_L(f,t)\, X_R^*(f,t)}{|X_L(f,t)|\,|X_R(f,t)|}\, e^{j 2 \pi f \tau} \qquad (8)$$
These GCC-NMF angular spectrograms then allow us to estimate the TDOA of each atom $d$ at each time $t$, defined as the TDOA $\tau$ for which GCC-NMF reaches its maximum value, i.e. $\tau_d(t) = \operatorname{argmax}_\tau G_d(t,\tau)$. We then associate individual atoms with the target if their estimated TDOA lies within a window of width $\varepsilon$ centred on the target TDOA $\tau^*$ estimated by GCC-PHAT; otherwise, they are associated with interference. This procedure defines a binary activation coefficient mask,

$$M_{dt} = \begin{cases} 1 & \text{if } \left| \tau_d(t) - \tau^* \right| < \varepsilon / 2 \\ 0 & \text{otherwise} \end{cases} \qquad (9)$$
where the effect of the window width $\varepsilon$ will be studied in Section V in the context of the soft-masking alternative presented below. Multiplying the mask with the activation coefficients element-wise and reconstructing as usual then yields a primary estimate of the target magnitude spectrogram,

$$\hat{V}^{\mathrm{target}}_c = W \left( M \odot H_c \right) \qquad (10)$$
We note that this mask eliminates atoms attributed to the interference, thus isolating the target speech from the mixture. As is typical in NMF-based separation, the complex target spectrogram is then estimated by applying a time-varying Wiener-like filter to the input signal. This filter is constructed in the frequency domain as the ratio between the target estimate $\hat{V}^{\mathrm{target}}_c$ and the mixture estimate $\hat{V}_c = W H_c$, i.e. the reconstructed estimate of the input magnitude spectrogram. The filter is then multiplied with the complex input spectrogram $X_c$,

$$\hat{X}^{\mathrm{target}}_c(f,t) = \frac{\hat{V}^{\mathrm{target}}_c(f,t)}{\hat{V}_c(f,t)}\, X_c(f,t) \qquad (11)$$

where $\hat{X}^{\mathrm{target}}_c$ is the complex target spectrogram estimate and $c \in \{L, R\}$ is the channel index. The complex target spectrogram estimate is then transformed to the time domain with the inverse STFT as described in Section IV-A.
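Putting the pieces together, the per-atom TDOA estimation and binary masking of equation (9) can be sketched as follows. Shapes, names, and the sign convention are again our own assumptions:

```python
import numpy as np

def gcc_nmf_mask(XL, XR, W, tdoas, freqs, tau_target, window):
    """Binary activation-coefficient mask from per-atom TDOA estimates.

    XL, XR: complex STFTs (F, T); W: NMF dictionary (F, D);
    tdoas: candidate TDOAs in seconds (K,); freqs: bin frequencies (F,);
    tau_target: target TDOA (s); window: TDOA window width (s).
    Returns a mask of shape (D, T): 1 where an atom is attributed to
    the target, 0 where it is attributed to interference.
    """
    cross = np.conj(XL) * XR
    cross /= np.abs(cross) + 1e-12                      # PHAT weighting
    steering = np.exp(2j * np.pi * freqs[:, None] * tdoas[None, :])
    Wn = W / (W.sum(axis=0, keepdims=True) + 1e-12)     # atom weights
    # per-atom angular spectrograms, shape (D, T, K)
    G = np.real(np.einsum('fd,ft,fk->dtk', Wn, cross, steering))
    tau_atom = tdoas[G.argmax(axis=2)]                  # (D, T) estimates
    return (np.abs(tau_atom - tau_target) < window / 2).astype(float)
```

The mask is computed independently per frame, which is what makes the frame-by-frame variant possible.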
II-D Soft masking GCC-NMF
Soft-mask alternatives to binary masking in the time-frequency domain are a common technique to improve speech enhancement performance [30, 31]. In this section, we propose a soft-mask alternative to the binary activation coefficient mask reviewed above. This soft NMF activation coefficient masking function is defined as,

$$M_{dt} = \lambda + \left( 1 - \lambda \right) \exp \left( - \left| \frac{\tau_d(t) - \tau^*}{\varepsilon / 2} \right|^{\gamma} \right) \qquad (12)$$
where $d$ is the atom index, $\tau^*$ is the target TDOA, $\varepsilon$ controls the window width, $\gamma$ controls the window shape, and $\lambda$ defines the window floor, i.e. its minimum value. This soft mask allows atoms to be attenuated in a continuous fashion based on the distance between their estimated TDOA and the target TDOA. In Section V-B3, we will study the effect of the masking function parameters of the binary and soft activation coefficient masks on objective speech enhancement quality and speech intelligibility measures. We will show that the parameters may be used to control the trade-off between interference suppression and target fidelity, as well as the trade-off between speech quality and speech intelligibility. For the other experiments presented in Section V, we set the masking parameters to $\varepsilon = 3/16$, $\lambda = 0$, and $\gamma \to \infty$, for which the soft masking function reduces to the binary masking function. Since the resulting mask is applied independently for each time frame, the parameters may be modified in real time based on the user's needs.
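Since the paper's exact expression for the soft mask is not reproduced here, the sketch below implements one hypothetical parameterization consistent with the description: a generalized-Gaussian window of the atom-to-target TDOA distance, where `shape -> inf` recovers the binary (boxcar) mask and `floor` sets the minimum attenuation:

```python
import numpy as np

def soft_tdoa_mask(tau_atom, tau_target, width, floor=0.0, shape=np.inf):
    """Soft activation-coefficient mask (hypothetical parameterization).

    tau_atom: per-atom TDOA estimates (any array shape);
    width: full window width; floor: minimum mask value in [0, 1);
    shape: window shape exponent, np.inf gives a boxcar (binary mask).
    """
    d = np.abs(np.asarray(tau_atom) - tau_target) / (width / 2)
    if np.isinf(shape):
        win = (d <= 1.0).astype(float)      # boxcar: binary masking
    else:
        win = np.exp(-d ** shape)           # generalized-Gaussian window
    return floor + (1.0 - floor) * win
```

Because the mask is recomputed per frame, these three parameters can be adjusted at runtime without touching the dictionary.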
III RT-GCC-NMF
As the GCC-NMF masking functions defined in the previous section are constructed independently for each frame, GCC-NMF has the potential to operate online in a frame-by-frame fashion. However, dictionary learning, activation coefficient inference, and target localization are all performed using the entire mixture signal, thus precluding online use. We address each of these elements in this section as we develop the real-time RT-GCC-NMF.
III-A Dictionary pre-learning
A typical approach for supervised speech enhancement with NMF is to pre-learn a pair of NMF dictionaries: one using isolated speech and one using isolated noise.
For a given test signal, the activation coefficients of both dictionaries are then inferred while keeping the dictionaries fixed [32, 33].
We adapt this approach to the unsupervised setting here by pre-learning a single NMF dictionary from a dataset containing both isolated speech and noise signals.
As described in Section V-A, individual STFT frames are chosen at random from the isolated speech and isolated noise signals from the CHiME dataset.
These frames are concatenated along the time axis to construct an NMF input matrix that is used as input to a standard NMF decomposition, i.e. using equations (5) and (6).
We keep the resulting NMF dictionary $W$, comprising elements of both speech and noise, and discard the resulting activation coefficients.
Contrary to the supervised approach, this approach remains purely unsupervised as a single dictionary is learned for both speech and noise using no prior knowledge. As the single pre-learned NMF dictionary contains features of both speech and noise signals, individual NMF dictionary atoms are then associated with the target speaker or interference at each point in time according to (9) or (12). This approach allows individual NMF dictionary atoms to encode either speech or noise at different points in time, thus overcoming the limitation in the supervised case where a single dictionary atom may only encode a single source. In Section V-B1, we show that we may achieve comparable performance to offline GCC-NMF by pre-learning the NMF dictionary using one dataset and testing on a completely different dataset with different speakers, acoustic environments, and recording setups. The dictionary pre-learning approach is therefore able to generalize across these conditions, avoiding the well-known mismatch problem with NMF-based speech enhancement when the training and testing data originate from different datasets .
III-B Activation coefficient inference
The activation coefficients $H$ of the pre-learned dictionary can be inferred for the input mixture on a frame-by-frame basis by initializing the activation coefficient vector randomly and updating it iteratively according to (5). We note that since the estimated target is $W(M \odot H)$ and the estimated interference is $W((1 - M) \odot H)$, the estimated mixture is $WH$ (the sum of target plus interference). Inference of the mixture coefficients is therefore performed independently of the coefficient mask estimation. The coefficient mask then attenuates atoms that are attributed to noise based on their TDOA estimates. We will see in Section V-B2 that better performance can in fact be achieved by forgoing activation coefficient inference altogether. In this case, we may replace the activation coefficients with all-ones, thus simplifying the Wiener-like filtering process defined in (11) as follows,

$$\hat{X}^{\mathrm{target}}_c(f,t) = \frac{\sum_d W_{fd}\, M_{dt}}{\sum_d W_{fd}}\, X_c(f,t) \qquad (13)$$
making use of the definition of $\hat{V}^{\mathrm{target}}_c$ from Eq. (10) and $\hat{V}_c = W H_c$ as defined in the text thereafter. An interesting consequence of this simplification is that the resulting filter no longer relies on the input magnitudes that were used to infer the NMF activation coefficients $H$. Depending only on the pre-learned dictionary and phase differences between the left and right channels, this simplification results in a purely phase-based variant of RT-GCC-NMF. As well, since a single mask is used for both channels, binaural cues remain unaffected by the filtering process.
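The simplified filter of equation (13) reduces to a per-bin gain that depends only on the dictionary and the mask. A sketch, with assumed shapes:

```python
import numpy as np

def phase_based_filter(X, W, M):
    """All-ones-coefficient Wiener-like filter (simplified RT-GCC-NMF).

    X: complex STFT of one channel (F, T); W: dictionary (F, D);
    M: activation-coefficient mask (D, T). The gain lies in [0, 1] and
    does not use inferred activation coefficients at all.
    """
    gain = (W @ M) / (W @ np.ones_like(M) + 1e-12)   # (F, T)
    return gain * X
```

Applying the identical gain to both channels is what leaves the binaural cues intact.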
III-C Online localization
With offline GCC-NMF, target localization was performed using a max-pooled GCC-PHAT technique, where the target TDOA $\tau^*$ is that at which the global maximum occurs in the GCC-PHAT angular spectrogram defined in (3), i.e. $\tau^* = \operatorname{argmax}_\tau \sum_t G_{\mathrm{PHAT}}(t,\tau)$. In the online setting, however, we only have access to present and past information, and the target localization method must consequently be adapted. We explore two online localization approaches here, for both static and moving speakers. In Section V, we consider the static speaker scenario. In this case, we take the current and all previous angular spectrogram frames into account in taking the argmax, i.e. $\tau^*(t) = \operatorname{argmax}_\tau \sum_{t' \le t} G_{\mathrm{PHAT}}(t',\tau)$. In Section VI, we extend this approach to the moving speaker case with a real-time open source demonstration. In this case, we use a sliding window approach where the argmax is taken over the recent history of the angular spectrogram, i.e. $\tau^*(t) = \operatorname{argmax}_\tau \sum_{t' = t - T_w + 1}^{t} G_{\mathrm{PHAT}}(t',\tau)$, where $T_w$ is the sliding window size. The effect of the window size may be explored interactively in real time: smaller window sizes track faster changes in source position but may switch to background noise during short pauses in the speech, while larger window sizes result in more stable tracking for more slowly moving speakers.
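Both online localization variants (the accumulated argmax and the sliding-window argmax) can be sketched in a few lines. The class and its interface are our own, not the paper's open source implementation:

```python
import numpy as np

class OnlineLocalizer:
    """Frame-by-frame TDOA tracking from GCC-PHAT frames.

    window_size=None accumulates all past frames (static-speaker case);
    a finite window tracks moving speakers at the cost of stability.
    """
    def __init__(self, tdoas, window_size=None):
        self.tdoas = tdoas
        self.window_size = window_size
        self.history = []          # recent GCC-PHAT frames, each shape (K,)

    def update(self, gcc_frame):
        """Add one angular spectrogram frame; return the current TDOA."""
        self.history.append(np.asarray(gcc_frame))
        if self.window_size is not None and len(self.history) > self.window_size:
            self.history.pop(0)    # drop the oldest frame in the window
        pooled = np.sum(self.history, axis=0)
        return self.tdoas[int(pooled.argmax())]
```

A ring buffer or running sum would avoid the O(window) pooling per frame; the list form is kept for clarity.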
IV Low latency RT-GCC-NMF
Speech enhancement algorithms built around the STFT incur an inherent algorithmic latency, independent of processing speed, equal to the window size plus the hop size (frame advance). Given the trade-off between spectral resolution and window size, algorithms that rely on high spectral resolution, including RT-GCC-NMF, often have latencies greater than 64 ms. However, such high latencies are not tolerable for a number of real-world applications of speech enhancement, including assistive listening devices. In this section, we combine the asymmetric STFT windowing approach proposed by Mauler and Martin with the RT-GCC-NMF speech enhancement system, thus simultaneously providing high spectral resolution and latencies well below the 10 ms target for hearing aids.
IV-A STFT and latency
The STFT processes sound in frames, i.e. short overlapping segments of time, where each frame is multiplied by an analysis window prior to computing its Fourier transform. Resynthesis is achieved by taking the inverse Fourier transform of the transformed frame, multiplying the resulting samples by a synthesis window, and combining neighbouring frames via the overlap-add (OLA) method. Perfect reconstruction can be achieved if the transform has the constant overlap-add (COLA) property, i.e. if the overlapped sum of the element-wise product of the analysis and synthesis windows is constant over time. A commonly-used window for both analysis and synthesis is the pointwise square root of the periodic Hann window, where the periodic Hann function is defined for frame size $N$ as,

$$w_N(n) = \frac{1}{2} \left( 1 - \cos \frac{2 \pi n}{N} \right), \qquad n = 0, \ldots, N - 1$$
The above process of overlapped signal windowing with OLA resynthesis induces an algorithmic latency equal to the window size $N$.
To run in real time, all processing, including the Fourier transform and its inverse, must occur within a single frame advance $R$, resulting in a total system latency of $N + R$ samples.
Running RT-GCC-NMF on input signals sampled at 16 kHz, with a window size of 1024 samples and 256 sample frame advance, for example, results in a total system latency of 80 ms.
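For concreteness, the latency arithmetic can be written as a small helper (our own, not part of the paper's code). With symmetric windowing the window term is the analysis window size; with the asymmetric scheme described below it is the short synthesis window:

```python
def total_latency_ms(window_size, hop_size, sample_rate):
    """Total system latency of OLA processing in milliseconds:
    window plus one hop, assuming all computation fits within a
    single frame advance."""
    return 1000.0 * (window_size + hop_size) / sample_rate

# symmetric: 1024-sample window, 256-sample hop at 16 kHz -> 80.0 ms
# asymmetric: 32-sample synthesis window, 16-sample hop   ->  3.0 ms
```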
A first approach to reducing the RT-GCC-NMF system latency is to simply reduce the window size $N$. This comes at the expense of decreased spectral resolution, however, and as we will show in Section V-C2, objective speech enhancement quality and intelligibility measures decrease significantly for small window sizes with this approach. We therefore propose an alternative approach to latency reduction based on an asymmetric STFT windowing method.
IV-B Asymmetric STFT windowing
Departing from the tradition of symmetric analysis and synthesis windows that have the same duration, asymmetric windowing allows us to simultaneously achieve high spectral resolution and low latency by combining long analysis windows with short synthesis windows.
The asymmetric windows we use in this work have been adapted from Mauler and Martin, though other asymmetric windowing approaches can be found in the literature [37, 38, 39, 40].
For a given frame size $N$, the asymmetric analysis and synthesis windows are designed such that their product is a Hann window of size $M$. This Hann window shares its right edge with the underlying frame, and can be made much shorter than the frame itself by choosing $M \ll N$, as depicted in Figure 3. The analysis window $w_A$ and the synthesis window $w_S$ are defined mathematically as,

$$w_A(n) = \begin{cases} w_{2(N - M/2)}(n) & 0 \le n < N - M/2 \\ \sqrt{w_M(n - N + M)} & N - M/2 \le n < N \end{cases}$$

$$w_S(n) = \begin{cases} 0 & 0 \le n < N - M \\ w_M(n - N + M)\, /\, w_A(n) & N - M \le n < N - M/2 \\ \sqrt{w_M(n - N + M)} & N - M/2 \le n < N \end{cases}$$

where $w_L$ denotes the periodic Hann window of length $L$.
These window functions are constructed in two parts with respect to the center of the analysis-synthesis product Hann window, i.e. $n_c = N - M/2$.
To the right of $n_c$, both analysis and synthesis windows consist of the right half of a square root Hann window of size $M$.
To the left, the analysis window consists of the left half of a Hann window of size $2(N - M/2)$, while the synthesis window is defined as the ratio of the product Hann window to the analysis window, restricted to the support of the product window, $[N - M, N - M/2)$.
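A sketch of this construction follows. The indexing and variable names are our own, and we verify the analysis-synthesis product property rather than claim the paper's exact definition:

```python
import numpy as np

def hann_periodic(L):
    """Periodic Hann window of length L."""
    n = np.arange(L)
    return 0.5 * (1.0 - np.cos(2.0 * np.pi * n / L))

def asymmetric_windows(frame_size, product_size):
    """Asymmetric analysis/synthesis windows whose element-wise product
    is a periodic Hann window of length `product_size`, right-aligned
    within the frame: long analysis (spectral resolution), short
    synthesis (low latency)."""
    N, M = frame_size, product_size
    assert M % 2 == 0 and M < N
    split = N - M // 2                     # centre of the product window

    product = np.zeros(N)
    product[N - M:] = hann_periodic(M)

    analysis = np.zeros(N)
    analysis[:split] = hann_periodic(2 * split)[:split]      # rising half
    analysis[split:] = np.sqrt(hann_periodic(M))[M // 2:]    # sqrt tail

    synthesis = np.zeros(N)
    left = slice(N - M, split)
    nz = analysis[left] > 1e-12
    synthesis[left] = np.where(
        nz, product[left] / np.where(nz, analysis[left], 1.0), 0.0)
    synthesis[split:] = np.sqrt(hann_periodic(M))[M // 2:]
    return analysis, synthesis
```

With a hop of `product_size // 2`, the product windows overlap-add like ordinary 50%-overlap Hann windows in the steady state.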
In Section V-C, we will show that this asymmetric windowing strategy allows us to drastically reduce the algorithmic latency of RT-GCC-NMF without affecting objective speech enhancement quality and intelligibility metrics. For example, running RT-GCC-NMF on a 16 kHz input signal with an analysis window of 1024 samples and a synthesis window of 32 samples with a 16-sample frame advance, we may reduce the total system latency to 3 ms. We note that retaining the relative synthesis window overlap while decreasing the synthesis window size increases the number of windows required for the overlap-add process, and thus increases the computational load. We present an empirical analysis of the computational requirements for varying algorithmic latencies in Section V-C3. Finally, we note that the asymmetric windowing approach increases the start-up latency of the STFT, though this may be mitigated by pre-padding the signal with zeros.
V-A Experimental setup
In this section, we evaluate the RT-GCC-NMF algorithm on the SiSEC 2016 speech in noise dev dataset, consisting of two-channel mixtures of speech and real-world background noise, with microphones separated by 8.6 cm.
Unsupervised dictionary pre-learning is performed on a small subset of the CHiME 2016 development set, with randomly selected frames equally divided between isolated speech and background noise signals from a single microphone.
The sample rate for both SiSEC and CHiME is 16 kHz, and we use an STFT with 1024-sample windows (64 ms), a 256-sample hop size (16 ms), and square root Hann analysis and synthesis window functions in the symmetric windowing case.
Default RT-GCC-NMF parameters are set to dictionary size = 1024, number of NMF dictionary pre-learning updates = 100, number of NMF activation coefficient inference updates at runtime = 100, number of TDOA samples = 128, and target TDOA window size 3/64 of the total range, i.e. 6 TDOA samples.
Speech enhancement quality is quantified using the Perceptual Evaluation methods for Audio Source Separation (PEASS) toolkit, and the BSS Eval "toolbox for performance measurement in (blind) source separation". PEASS is a perceptually-motivated method that correlates better with subjective assessments than the traditional SNR-based metrics provided by BSS Eval. These open source toolkits both provide measures of overall enhancement quality, target fidelity, interference suppression, and lack of perceptual artifacts, where higher scores are better in all cases. The overall, target-related, interference-related, and artifact-related scores are named OPS, TPS, IPS, and APS respectively in the case of PEASS, and SDR, ISR, SIR, and SAR in the case of BSS Eval. Speech intelligibility is quantified with the short-time objective intelligibility (STOI) and the extended STOI (ESTOI) measures, where ESTOI has been shown to correlate better with listening test scores than STOI.
V-B RT-GCC-NMF experiments
We begin by studying the effects of the RT-GCC-NMF parameters on objective speech enhancement quality and speech intelligibility metrics. We first study the effects of the pre-learned dictionary size and the amount of data used for pre-learning, followed by the number of training and inference iterations, the RT-GCC-NMF target TDOA masking function parameters, and the input SNR. These evaluations are performed with offline target TDOA estimation using the max-pooled GCC-PHAT approach. We then compare the offline localization approach with the simple accumulated online localization method, and compare the results with other speech enhancement algorithms from the SiSEC challenge and the ideal binary mask oracle baseline.
Speech enhancement scores for varying dictionary size and number of frames used for dictionary pre-learning are shown in Figure 5 a) and b) respectively, with a default dictionary size of 1024 and a default train set size of 2048 frames of 64 ms each, divided equally between speech and noise. Overall PEASS and intelligibility scores converge quickly with increasing train set size, such that performance is near maximal for most measures with only 1024 training frames, while SNR-based scores exhibit more variability across train set sizes. Contrary to many supervised approaches, therefore, the unsupervised dictionary pre-learning presented here requires very little training data. For the NMF dictionary size, we note a monotonic increase in overall PEASS and intelligibility scores, with diminishing returns. While the SNR scores again exhibit more variability, they generally increase with dictionary size as well. We observed a similar trend with offline GCC-NMF for increasing dictionary size, as well as similar overall PEASS and SNR scores on the same dataset. The dictionary pre-learning technique therefore generalizes to new speakers, noise and acoustic conditions, and recording setups. In Section V-B5, we study the capacity for generalization further by comparing sources used for dictionary learning in more detail.
[Table I: PEASS, BSS Eval, STOI, and ESTOI scores for the dictionary (pre-learned vs. mixture-learned), coefficient (inferred vs. all-ones), and localization (offline vs. online) variants, for 1024-atom and 16384-atom dictionaries.]
NMF training and inference updates
The effect of the number of NMF dictionary pre-learning updates on enhancement performance is presented in Figure 5 c).
As was the case for offline GCC-NMF, increasing the number of training iterations results in increased interference suppression, with PEASS target fidelity and lack-of-artifact scores decreasing for higher numbers of iterations, while the overall PEASS and intelligibility scores remain stable beyond 50 updates.
The choice of the number of training iterations therefore offers offline control of the trade-off between target fidelity and interference suppression.
One could learn a set of dictionaries spanning a range of training iterations, and subsequently control the trade-off online by selecting the desired dictionary on a frame-by-frame basis.
In Figure 5 d), we present the effect of the number of online NMF inference iterations, i.e. the number of times equation (5) is applied for a given input frame after random initialization of the NMF activation coefficients $H$. We note convergence effects similar to the number of training iterations for large numbers of updates. For small numbers of updates, however, we note an opposite effect for both overall PEASS and intelligibility scores, as both increase with decreasing iterations. The best overall PEASS and intelligibility scores are in fact achieved when no inference is performed, i.e. when random activation coefficients are used. As mentioned in Section III-B, we can thus forgo the activation coefficient inference stage completely and perform the Wiener-like filtering using only the pre-learned dictionary and input phase differences, as in equation (13). These results are presented as well (all-ones activation coefficients, indicated with inference updates = -1), showing slightly better results than random activation coefficients for all metrics. For subsequent experiments, we will therefore focus on this simplified phase-based RT-GCC-NMF formulation using all-ones activation coefficients.
Finally, we note that both the number of training and inference iterations offer control over the target fidelity vs. interference suppression trade-off. While the dictionary pre-learning is performed offline, and thus has no computational effect online, increasing the number of inference iterations comes with additional computational cost at runtime.
RT-GCC-NMF mask parameter experiments
In Figure 6, we present the effects of the RT-GCC-NMF soft-masking function parameters on objective speech enhancement quality and intelligibility scores: the TDOA window size $\varepsilon$, the window floor $\lambda$, and the shape parameter $\gamma$ from equation (12).
Default settings for these parameters are $\varepsilon = 3/16$, $\lambda = 0$, and $\gamma \to \infty$.
We first note that the TDOA window size has a drastic effect on the target fidelity vs. interference suppression trade-off, where widening the TDOA window results in reduced interference suppression and higher target fidelity.
Since the target TDOA window width can be controlled online, this provides the most significant control of the trade-off with respect to the parameters presented so far, with no effect on computational requirements.
The highest overall quality scores are achieved for small windows near 1/16 of the TDOA range, while the highest intelligibility scores are achieved for wider windows between 1/4 and 3/8.
Effects of the two remaining masking parameters, the window floor $\lambda$ and window shape $\gamma$, are shown in Figure 6 b) and c) respectively. Both overall PEASS and BSS Eval scores reach their maxima for $\lambda = 0$ and $\gamma \to \infty$, i.e. for a zero window floor and a boxcar shaped function, thus reducing to the binary activation coefficient mask. However, we note peaks in the intelligibility scores for small non-zero window floors and finite window shapes, suggesting that while soft-masking may not bring improvement in terms of speech quality, it can improve speech intelligibility.
Effect of input SNR
To study the effect of input SNR on speech enhancement performance, we recreate all examples from the SiSEC dev dataset at SNRs varying from -40 to 40 dB by rescaling the target speech and noise signals prior to mixing to achieve the desired SNR. Absolute overall quality and intelligibility measures are presented in Figure 7 a), and relative improvements with respect to the input mixture are shown in Figure 7 b). Since we found previously that the overall quality and intelligibility scores reach their peaks at different values of the TDOA window width (Section V-B3), we present results here for both a narrow window width, favouring overall enhancement quality metrics, and a wide window width, favouring intelligibility measures. The wide windows do indeed result in better intelligibility scores over all input SNRs, with improvement in intelligibility scores occurring for input SNRs between -30 and 20 dB. The narrow window results in significant improvement in SNR for negative input SNRs, but reduced quality for positive input SNRs. In both cases, the SNR improvement increases monotonically with decreasing input SNR, suggesting that RT-GCC-NMF offers the greatest improvements in the most challenging conditions in terms of SNR. PEASS scores, on the other hand, suggest that the improvement in speech quality peaks for moderate noise levels, with decreasing improvement for lower and higher input SNRs and the largest improvement between -10 and 10 dB. Intelligibility scores also suggest the biggest improvement for input SNRs near 0 dB, bringing improvement from -20 to 10 dB.
| Magoarou [49, 50] | 32.66 ± 7.51 | 66.10 ± 22.10 | 34.84 ± 12.71 | 44.13 ± 11.87 | 3.67 ± 5.69 | 17.79 ± 5.46 | 5.62 ± 6.36 | 8.77 ± 4.17 |
| Wang [51, 52] | 38.01 ± 5.79 | 53.91 ± 9.25 | 54.50 ± 6.12 | 50.60 ± 7.02 | 9.84 ± 3.09 | 13.54 ± 4.88 | 19.98 ± 3.29 | 11.98 ± 2.76 |
Comparison between approaches
In Table I, we compare the performance of several elements of RT-GCC-NMF including dictionary pre-learning vs. mixture learning, activation coefficient inference vs. all-ones coefficients, moderate vs. large NMF dictionaries, and offline vs. online localization, for the static speaker case.
We first note that replacing activation coefficients with the all-ones vector, as described in Section III-B, results in a substantial increase in performance across PEASS and intelligibility scores in all cases, despite decreasing the SNR-based BSS Eval score.
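One way to picture the all-ones substitution is that per-atom target weights are combined into a Wiener-like time-frequency mask through the dictionary, with the inferred activation coefficients replaced by a constant vector. The function below is an illustrative sketch, not the reference implementation:

```python
import numpy as np

def atom_mask_to_tf_mask(W, atom_weights, coefficients=None):
    """Combine per-atom target weights into a time-frequency mask.

    W:            (freq, atoms) non-negative NMF dictionary.
    atom_weights: (atoms,) per-atom target weights in [0, 1],
                  e.g. from the TDOA-based masking function.
    coefficients: (atoms, frames) activation coefficients; when None,
                  an all-ones vector is used per frame, bypassing
                  coefficient inference entirely.
    """
    if coefficients is None:
        coefficients = np.ones((W.shape[1], 1))  # all-ones activations
    target = W @ (atom_weights[:, None] * coefficients)
    total = W @ coefficients
    return target / np.maximum(total, 1e-12)  # Wiener-like ratio mask
```

With all-ones coefficients, only the pre-learned dictionary and the per-atom TDOA weights are needed at runtime.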
Performing target localization online, as described in Section III-C, results in somewhat decreased performance in almost all cases.
A more robust approach to target localization than the simple accumulated GCC-PHAT is therefore likely necessary in practice.
Pre-learning the NMF dictionary results in similar performance to the previously presented offline mixture-learned approach.
As the dictionary is pre-learned using a different dataset (CHiME) than used for testing (SiSEC), this approach generalizes to unseen speakers, acoustic conditions, and recording setups.
Finally, we note that the all-ones activation coefficients approach also results in increased performance for the offline mixture-trained GCC-NMF approach we had presented previously, exhibiting significantly improved performance in terms of PEASS and intelligibility measures.
Offline GCC-NMF can therefore also benefit from this finding.
To further study the ability of the proposed RT-GCC-NMF to generalize across datasets, we proceed to determine whether its performance is affected when the inference dataset differs from the dataset used to train the NMF dictionary. The SiSEC dev set is first split into train and evaluation subsets, where inference is performed on the second half of each example while the first half is used for training. This split guarantees that the same speakers and noise types are present in both subsets, with different instances occurring in each. The within-corpus dictionary is then trained using the first half of the SiSEC dataset, while the cross-corpus dictionary is trained on CHiME. In Table II, we present results for both the standard dictionary training method as well as the copy-to-train dictionary construction method presented in previous work, where randomly-selected frames from the training data are used directly as the dictionary atoms. We first note that there is no significant difference in performance between the within-corpus dictionary and the cross-corpus dictionary, for both dictionary construction methods. Second, we note improved performance when using the train method over the copy method for both the within-corpus and cross-corpus dictionaries. This may be due to NMF learning a more fine-grained parts-based representation, with the resulting mask constructed by associating more low-level parts instead of complete spectra as with the copy-to-train approach.
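The copy-to-train construction can be sketched in a few lines; the helper below is illustrative, assuming magnitude spectrogram frames as columns:

```python
import numpy as np

def copy_to_train_dictionary(spectrogram_frames, num_atoms, seed=0):
    """Copy-to-train dictionary construction: randomly selected
    training frames (magnitude spectra) are used directly as
    dictionary atoms, in place of NMF dictionary learning."""
    rng = np.random.default_rng(seed)
    indices = rng.choice(spectrogram_frames.shape[1], num_atoms,
                         replace=False)
    atoms = spectrogram_frames[:, indices].astype(float)
    # normalize each atom to unit sum, as is typical for NMF dictionaries
    return atoms / np.maximum(atoms.sum(axis=0, keepdims=True), 1e-12)
```

Each atom is then a complete spectrum from the training data, in contrast to the lower-level parts learned by NMF.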
In Table III, we compare RT-GCC-NMF with other algorithms from the 2013, 2015, and 2016 SiSEC separation challenges [55, 53, 24]. We use the SiSEC dev set for comparison here, as the test set was evaluated by the challenge organizers and the isolated speech and noise signals necessary for evaluation are not publicly available. We therefore use the default parameters defined in Section V-A and do not optimize the parameters using the analysis presented in Section V-B. We present three variants in Table III: offline GCC-NMF with a mixture-learned dictionary, and pre-learned dictionary RT-GCC-NMF with and without online localization, each using the phase-based, all-ones activation coefficient technique. For offline GCC-NMF, the NMF dictionary is trained directly on the SiSEC mixture signals, while for RT-GCC-NMF, the dictionary is trained using isolated speech and noise signals from the CHiME dataset as described in Section V-A. All approaches outperform all but one of the previous methods, most of which rely on supervised learning or are unsuitable for online settings. The method that outperforms RT-GCC-NMF is a frequency-domain blind source separation technique using a region-growing permutation alignment approach. While its authors show that it has the potential to run in real-time, a real-time implementation has not yet been presented to the best of our knowledge. These results demonstrate that RT-GCC-NMF holds significant potential for future research and applications, especially given that it remains purely unsupervised, conceptually simple, easy to implement, and generalizes across speakers, acoustic environments, and recording setups.
V-C Low latency RT-GCC-NMF experiments
In this section, we evaluate the low latency version of RT-GCC-NMF. We begin by comparing the effect of latency reduction of the symmetric and asymmetric windowing methods on the learned NMF dictionary atoms, followed by the effect of both approaches on the objective speech enhancement quality and intelligibility measures. We then study the empirical processing time requirements of low latency RT-GCC-NMF for a variety of hardware platforms to determine the conditions under which the proposed system may run in real-time on currently available hardware platforms.
Asymmetric windowing and NMF dictionary atoms
In Figure 8 a), we depict example NMF dictionary atoms learned using the symmetric STFT windowing method, for varying algorithmic latencies. We note that as the window size is decreased, atoms become increasingly wideband, and the spectral details captured with longer duration windows are lost. Contrary to the traditional windowing approach, asymmetric windowing allows us to retain long-duration analysis windows while decreasing the synthesis window size. As the synthesis window size is reduced, the analysis window size remains fixed at the frame size, with its shape increasingly weighted towards the most recent samples. Example NMF dictionary atoms learned using the asymmetric windowing approach with varying algorithmic latencies are shown in Figure 8 c). We note that the learned NMF dictionary atoms retain spectral detail independent of synthesis window size. Since identical training data and random seeds are used in all cases, the resulting atoms remain very similar across all algorithmic latencies, with subtle differences resulting from the different analysis window shapes.
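A rough sketch of such an asymmetric analysis window is given below, using square-root-Hann segments for illustration only; the actual perfect-reconstruction window design follows Mauler and Martin and differs in detail:

```python
import numpy as np

def asymmetric_analysis_window(frame_size, synthesis_size):
    """Qualitative sketch of an asymmetric analysis window: a long
    square-root-Hann rise over past samples followed by a short
    square-root-Hann fall over the final samples, weighting the
    window towards the most recent input. Not the exact
    perfect-reconstruction design of Mauler and Martin."""
    fall_len = synthesis_size // 2
    rise_len = frame_size - fall_len
    rise = np.sqrt(np.hanning(2 * rise_len)[:rise_len])
    fall = np.sqrt(np.hanning(2 * fall_len)[fall_len:])
    return np.concatenate([rise, fall])
```

For a 1024-sample frame and 64-sample synthesis window, the window peaks near the end of the frame, retaining the long analysis context that preserves spectral detail in the learned atoms.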
Asymmetric windows and speech enhancement quality
In Figure 9 a), we present the objective speech enhancement quality and intelligibility measures as a function of algorithmic latency for the symmetric windowing case.
We note that both the overall quality and intelligibility scores decrease with decreasing window size, with a significant drop in overall PEASS performance for window sizes less than 8 ms.
This is likely due to a decreased separability of speech and noise sources with the wideband NMF dictionary atoms shown in Figure 8 a), resulting in decreased quality of the resulting RT-GCC-NMF speech enhancement.
We also note a significant trade-off between interference suppression and both target fidelity and artifact PEASS scores, where smaller window sizes result in increased interference suppression at the cost of significant artifacts and poorer target fidelity.
In Figure 9 b), we present the effect of latency for the asymmetric windowing approach in the same conditions as above. The analysis window here is kept fixed at 1024 samples at 16 kHz (64 ms), while the synthesis window size is varied from 512 to 32 samples (32 to 2 ms), with an overlap of 75% of the synthesis window used in each case. We note that all scores remain relatively constant for varying synthesis window size, even for latencies as low as 2 ms. These results demonstrate that the proposed asymmetric windowing approach is a viable solution to reduce the latency of RT-GCC-NMF to values well below the threshold required for hearing devices, while maintaining the enhancement quality of the higher latency symmetric windowing approach.
| System | CPU Type | Cores | Clock Speed | RAM | GPU Type | Cores | RAM | Theoretical Max. Power |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Raspberry Pi 3 | ARM A53 | 4 | 1.2 GHz | 1 GB | - | - | - | 4 W |
| Jetson TX1 GPU | ARM A57 | 4 | 1.9 GHz | 4 GB | NVIDIA Maxwell | 256 | 4 GB | 15 W |
| MacBook Pro | Intel Core 2 Duo | 2 | 2.4 GHz | 4 GB | - | - | - | 55 W |
| Xeon, Tesla | Intel Xeon E5-1620v3 | 4 | 3.5 GHz | 16 GB | NVIDIA Tesla K40C | 2880 | 12 GB | 275, 500 W |
Latency and RT-GCC-NMF processing time
We now proceed to study the computational requirements of the RT-GCC-NMF speech enhancement algorithm with asymmetric windowing to determine the conditions under which it may be run in real-time.
As we saw in Section IV-B, the latency of the asymmetric STFT process in a real-time system is equal to the duration of the synthesis window plus the frame advance.
For speech enhancement to be performed in real-time, the system must then process a single frame within the time of a single frame advance.
This processing time includes the windowing processes, the forward FFT, the RT-GCC-NMF speech enhancement processing itself, the inverse FFT, and the OLA summation.
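This timing relationship can be made concrete with a small helper, following directly from the text: latency equals the synthesis window duration plus the frame advance, and the frame advance is the per-frame processing budget:

```python
def stft_latency_ms(synthesis_window_ms, overlap):
    """Algorithmic latency of the asymmetric STFT and the per-frame
    processing budget. The latency is the synthesis window duration
    plus the frame advance; a frame must be fully processed within
    one frame advance for real-time operation."""
    frame_advance_ms = synthesis_window_ms * (1.0 - overlap)
    return synthesis_window_ms + frame_advance_ms, frame_advance_ms
```

For example, a 16 ms synthesis window at 50% overlap yields a 24 ms latency with an 8 ms per-frame budget, matching the operating points discussed below.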
To evaluate processing time empirically, we use the RT-GCC-NMF implementation written in Python with the Theano optimizing compiler, as described in Section VI.
In Figure 10 a), we present the average measured processing time of a single frame, as a function of the NMF dictionary size, for a variety of hardware platforms (see Table IV for specifications). We note that the processing time increases approximately linearly with dictionary size, with the slope varying between hardware platforms. On all systems presented, processing times less than 8 ms are possible, provided a small enough dictionary is used. Since speech enhancement performance decreases smoothly with decreasing dictionary size as we showed in Section V-B1, Figure 5 a), RT-GCC-NMF can be easily adapted to a range of hardware platforms, with performance determined by available computational power. Finally, we note that since the asymmetric windowing approach only affects the shape of the analysis and synthesis windows, it has no effect on the computational demands of the algorithm for a single frame. Exactly the same operations are performed in both cases, where only the sampled values of the window functions differ. However, asymmetric windowing does result in more overall computation as more frames need to be processed due to the decreased frame advance required for short synthesis windows.
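As a rough illustration of how per-frame cost scales with dictionary size, one can time a dictionary-size-dependent masking step in isolation. The sketch below is illustrative only; absolute timings depend entirely on hardware and implementation, and the real system also includes windowing, FFTs, and OLA summation:

```python
import time
import numpy as np

def measure_frame_time(num_atoms, freq_bins=513, num_trials=100):
    """Rough per-frame processing time estimate: time a masking step
    whose cost grows with dictionary size (combining atoms into a
    mask and applying it to a frame), averaged over trials."""
    rng = np.random.default_rng(0)
    W = rng.random((freq_bins, num_atoms))      # NMF dictionary
    atom_weights = rng.random(num_atoms)        # per-atom TDOA weights
    frame = rng.random(freq_bins)               # magnitude spectrum
    start = time.perf_counter()
    for _ in range(num_trials):
        mask = (W @ atom_weights) / np.maximum(W.sum(axis=1), 1e-12)
        _ = mask * frame
    return (time.perf_counter() - start) / num_trials
```

Comparing the measured time against the per-frame budget for a candidate synthesis window size and overlap indicates whether a given dictionary size can run in real-time on a given platform.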
In Figure 10 b), we depict the relationship between the inherent STFT latency and available processing time for a single frame, as a function of synthesis window size and overlap.
The available processing time is determined analytically for a given window size and overlap, and does not reflect empirical, measured values.
Instead, this Figure depicts the maximum amount of time that a given hardware device can take to process a single frame in order to operate in real-time.
We note that decreasing either the synthesis window size or the frame advance decreases the system latency at a cost of decreased available processing time.
We may combine this information with Figure 10 a) to determine, for a given hardware platform and dictionary size, the available synthesis window size and overlap values (and resulting latencies), for the system to run in real-time.
Experiments showed that enhancement performance was unaffected by window overlap for values greater than 75%, with a small decrease in performance at an overlap of 50%. To maximize the available per-frame processing time, we consider a window overlap of 50% in the following. All systems prove fast enough for a synthesis window size of 16 ms with 50% overlap and a dictionary size of 64, resulting in a latency of 24 ms. All systems except the Raspberry Pi may achieve 12 ms latency for small to moderate dictionary sizes, with a window size of 8 ms and 50% overlap. The fastest system (Tesla K40 GPU) can achieve 6 ms latency for dictionaries at least as large as 1024 atoms. It is therefore possible to achieve latencies suitable for hearing assistive devices on modern embedded and desktop computing platforms using RT-GCC-NMF speech enhancement with the asymmetric windowing latency reduction technique. Future work will involve additional implementation optimizations in an effort to run RT-GCC-NMF on lower-power devices, including digital signal processors (DSPs), that are better suited to wearable real-world use for hearing assistive applications than the platforms presented here.
VI Real-time implementation
RT-GCC-NMF was written in Python, using the Theano optimizing compiler, with an interactive interface using PyQt and pyqtgraph. The graphical user interface allows users to manipulate both NMF and masking function parameters in real-time, such that their effects on subjective enhancement quality and intelligibility can be studied interactively. The user may also specify the target TDOA location manually or enable automatic localization, where the effect of the sliding window width can be manipulated in real-time. Examples of both static and moving speakers are provided. The software has been tested on a wide range of hardware platforms, where performance can be made to degrade smoothly with decreasing computational power by using smaller pre-learned dictionaries, as we showed in Figure 5. Source code for RT-GCC-NMF and iPython notebook demonstrations are made available as open source at https://www.github.com/seanwood/gcc-nmf.
We have presented a low latency speech enhancement algorithm called RT-GCC-NMF, and studied its performance on stereo mixtures of speech and real-world noise.
We showed that by pre-learning the NMF dictionary in a purely unsupervised fashion on a different dataset than used at runtime, RT-GCC-NMF generalizes to new speakers, acoustic environments, and recording setups.
This approach is therefore flexible in terms of training data, where training is not bound to a speaker-specific dataset or to test data from a limited number of speakers.
Also, only a very small amount of unlabelled training data is required, on the order of one thousand 64 ms frames, significantly less than the hours or days of labelled training data required by supervised deep learning approaches to speech enhancement.
RT-GCC-NMF holds significant potential for future research as it is conceptually simple, easy to implement, purely unsupervised, and generalizes to unseen datasets.
A phase-based version of RT-GCC-NMF was developed that bypasses NMF activation coefficient inference, such that only the pre-learned dictionary and input phase differences are required at runtime. This method outperformed all but one algorithm previously submitted to the SiSEC speech enhancement challenge, comparing favourably to the state-of-the-art and the ideal binary mask (IBM) baseline. This approach simultaneously improved both objective speech quality and intelligibility metrics over a wide range of input SNRs. An interesting direction for future work would be to combine phase-based RT-GCC-NMF with other phase-aware approaches to further improve estimation of the Wiener filter, or to estimate the phase spectrum itself, as it has been shown that it is possible to outperform the IBM baseline by enhancing the spectral phase prior to reconstruction [59, 60].
We presented a soft NMF activation coefficient masking alternative to the binary coefficient masking function, and showed that the trade-off between interference suppression and target fidelity can be controlled frame-by-frame via the target TDOA window width, with no effect on computational cost. The trade-off between speech quality and speech intelligibility can also be controlled on a frame-by-frame basis via the masking function parameters. In the context of hearing assistive devices, users could therefore be given control of the soft-masking parameters, such that they could be modified depending on their needs for intelligibility, quality, or interference suppression for a given situation.
We drastically reduced the inherent algorithmic latency of RT-GCC-NMF by incorporating an asymmetric STFT windowing scheme proposed by Mauler and Martin. Objective speech enhancement quality and intelligibility metrics were shown to remain unaffected over latencies from 32 ms down to 2 ms, with the NMF dictionary atoms adapting to the changing analysis window shapes. The resulting latency falls well within the range of tolerable latencies for hearing aids, i.e. 10 ms or less in the general case.
Finally, we developed an open source implementation of RT-GCC-NMF, allowing subjective analysis of the effects of various system parameters to be studied interactively in real-time via a graphical user interface. We showed that latencies suitable for use in hearing assistive applications were achievable on a variety of hardware platforms ranging from desktop PCs to low-cost embedded system on a chips (SoCs). Future work will include further algorithmic and memory optimizations to run RT-GCC-NMF on lower-power devices suitable for real-world hearing assistive applications.
The authors would like to thank ACELP/CEGI, NSERC, and FQRNT (CHIST-ERA, IGLU) for funding our research, as well as the developers of the open-source PEASS and BSS Eval libraries. We would also like to thank William E. Audette and Simon Brodeur for inspiring discussions during the development of this work. This research was enabled in part by support provided by Calcul Québec (www.calculquebec.ca) and Compute Canada (www.computecanada.ca).
Sean U. N. Wood received the B.A.Sc. degree in engineering science from the University of Toronto, Toronto, ON, Canada in 2004, followed by the M.Sc. degree in computer science at the Montreal Institute for Learning Algorithms (MILA) from the Université de Montréal, Montreal, QC, Canada, in 2010, and the Ph.D. degree in electrical engineering at the Computational Neuroscience and Intelligent Signal Processing Research (NECOTIS) group at the Université de Sherbrooke, Sherbrooke, QC, Canada in 2017. He is currently a Postdoctoral Researcher with the Signal Processing and Speech Communication (SPSC) Laboratory at the Graz University of Technology in Graz, Austria. His research combines machine learning, multi-channel signal processing, and computational neuroscience. He is particularly interested in unsupervised learning algorithms, real-time systems, and applications in speech and biomedical signal processing.
Jean Rouat (S’82-M’88-SM’06) received the M.Sc. degree in physics from the Université de Bretagne, Brest, France, in 1981, the E. & E. M.Sc.A. degree in speech coding and speech recognition from the Université de Sherbrooke, QC, Canada, in 1984, and the E. & E. Ph.D. degree in cognitive and statistical speech recognition jointly from the Université de Sherbrooke and McGill University, Montreal, QC, in 1988. He held a post-doctoral position in psychoacoustics with the MRC Applied Psychology Unit, Cambridge, U.K., and in electrophysiology with the Institute of Physiology, Lausanne, Switzerland. He was an Adjunct Professor with the Department of Biological Sciences, Université de Montréal from 2007 to 2018. He is currently with the Université de Sherbrooke, where he founded the Computational Neuroscience and Intelligent Signal Processing Research Group, and is a Full Member of the Centre for Interdisciplinary Research in Music Media and Technology, Schulich School of Music, McGill University, Montreal. His translational research links neuroscience and engineering to create new technologies and applications based on a better understanding of multimodal representations (vision and audition). Information hiding in multimedia signals, the development of low-power neural processing hardware for sustainable development, and interactions with artists for multimedia and musical creations are examples of transfers he leads based on his knowledge of neuroscience and of the visual and auditory systems. He leads funded machine learning projects to develop sensory substitution and intelligent devices. He is also the coordinator of the interdisciplinary IGLU CHIST-ERA European Consortium (IGLU - Interactive Grounded Language Understanding) for the development of an intelligent agent that learns through multimodal grounded interactions.
- 1932-4553 © 2018 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information. Citation information: DOI 10.1109/JSTSP.2019.2909193, IEEE Journal of Selected Topics in Signal Processing
- Source code is available at https://www.github.com/seanwood/gcc-nmf
- Preliminary elements of this paper were presented in [13, 14, 15]
- Z.-Q. Wang, J. Le Roux, and J. R. Hershey, “Multi-channel deep clustering: Discriminative spectral and spatial embeddings for speaker-independent speech separation,” Acoustics, Speech and Signal Processing (ICASSP), IEEE International Conference on, 2018.
- Z.-Q. Wang and D. Wang, “Integrating spectral and spatial features for multi-channel speaker separation,” Proc. Interspeech 2018, pp. 2718–2722, 2018.
- D. S. Williamson and D. Wang, “Time-frequency masking in the complex domain for speech dereverberation and denoising,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 25, no. 7, pp. 1492–1501, 2017.
- M. Kolbæk, Z.-H. Tan, and J. Jensen, “Speech intelligibility potential of general and specialized deep neural network based speech enhancement systems,” IEEE/ACM Transactions on Audio, Speech, and Language Processing (TASLP), vol. 25, no. 1, pp. 153–167, 2017.
- S. U. N. Wood, J. Rouat, S. Dupont, and G. Pironkov, “Blind speech separation and enhancement with GCC-NMF,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 25, no. 4, pp. 745–755, 2017.
- D. D. Lee and H. S. Seung, “Algorithms for non-negative matrix factorization,” in Advances in neural information processing systems, 2001, pp. 556–562.
- C. H. Knapp and G. C. Carter, “The generalized correlation method for estimation of time delay,” IEEE Transactions on Acoustics, Speech and Signal Processing, vol. 24, no. 4, pp. 320–327, 1976.
- D. D. Lee and H. S. Seung, “Learning the parts of objects by non-negative matrix factorization,” Nature, vol. 401, no. 6755, pp. 788–791, 1999.
- M. N. Schmidt and R. K. Olsson, “Single-channel speech separation using sparse non-negative matrix factorization,” in ISCA International Conference on Spoken Language Proceesing (INTERSPEECH), 2006.
- P. Smaragdis, B. Raj, and M. Shashanka, “Supervised and semi-supervised separation of sounds from single-channel mixtures,” Independent Component Analysis and Signal Separation, pp. 414–421, 2007.
- C. Joder, F. Weninger, F. Eyben, D. Virette, and B. Schuller, “Real-time speech separation by semi-supervised nonnegative matrix factorization,” in Latent Variable Analysis and Signal Separation. Springer, 2012, pp. 322–329.
- A. Ozerov, E. Vincent, and F. Bimbot, “A general flexible framework for the handling of prior information in audio source separation,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 20, no. 4, pp. 1118–1133, 2012.
- S. U. N. Wood and J. Rouat, “Real-time speech enhancement with GCC-NMF,” in Proceedings of the Interspeech 2017 Conference, August 2017.
- ——, “Real-time speech enhancement with GCC-NMF: Demonstration on the Raspberry Pi and NVIDIA Jetson,” in Proceedings of the Interspeech 2017 Conference, August 2017.
- ——, “Towards GCC-NMF speech enhancement for hearing assistive devices: Reducing latency with asymmetric windows,” International Workshop on Challenges in Hearing Assistive Technology, 2017.
- B. J. King, “New methods of complex matrix factorization for single-channel source separation and analysis,” Ph.D. dissertation, University of Washington, 2012.
- N. Mohammadiha, P. Smaragdis, and A. Leijon, “Supervised and unsupervised speech enhancement using nonnegative matrix factorization,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 21, no. 10, pp. 2140–2151, 2013.
- M. A. Stone and B. C. Moore, “Tolerable hearing aid delays. I. estimation of limits imposed by the auditory path alone using simulated hearing losses,” Ear and Hearing, vol. 20, no. 3, p. 182, 1999.
- M. A. Stone and B. C. J. Moore, “Tolerable hearing-aid delays: IV. effects on subjective disturbance during speech production by hearing-impaired subjects,” Ear and Hearing, vol. 26, no. 2, pp. 225–235, 2005.
- R. Herbig and J. Chalupper, “Acceptable processing delay in digital hearing aids,” Hearing Review, vol. 17, no. 1, pp. 28–31, 2010.
- J. Agnew and J. M. Thornton, “Just noticeable and objectionable group delays in digital hearing aids,” Journal of the American Academy of Audiology, vol. 11, no. 6, pp. 330–336, 2000.
- H. Dillon, G. Keidser, A. O’brien, and H. Silberstein, “Sound quality comparisons of advanced hearing aids,” The hearing journal, vol. 56, no. 4, pp. 30–32, 2003.
- D. Mauler and R. Martin, “A low delay, variable resolution, perfect reconstruction spectral analysis-synthesis system for speech enhancement,” in European Signal Processing Conference, 2007, 2007, pp. 222–226.
- A. Liutkus, F.-R. Stöter, Z. Rafii, D. Kitamura, B. Rivet, N. Ito, N. Ono, and J. Fontecave, “The 2016 signal separation evaluation campaign,” in International Conference on Latent Variable Analysis and Signal Separation. Springer, 2017, pp. 323–332.
- X. Anguera, “Robust speaker diarization for meetings,” Ph.D. dissertation, Universitat Politècnica de Catalunya, 2006.
- G. C. Carter, “Coherence and time delay estimation,” Proceedings of the IEEE, vol. 75, no. 2, pp. 236–255, 1987.
- M. Omologo and P. Svaizer, “Acoustic event localization using a crosspower-spectrum phase based technique,” in IEEE International Conference on Acoustics, Speech, and Signal Processing, 1994, pp. II–273.
- M. S. Brandstein and H. F. Silverman, “A robust method for speech signal time-delay estimation in reverberant rooms,” in IEEE International Conference on Acoustics, Speech, and Signal Processing, 1997, pp. 375–378.
- C. Févotte and J. Idier, “Algorithms for nonnegative matrix factorization with the β-divergence,” Neural Computation, vol. 23, no. 9, pp. 2421–2456, 2011.
- M. H. Radfar and R. M. Dansereau, “Single-channel speech separation using soft mask filtering,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 15, no. 8, pp. 2299–2310, 2007.
- A. M. Reddy and B. Raj, “Soft mask methods for single-channel speaker separation,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 15, no. 6, pp. 1766–1776, 2007.
- F. Weninger, J. Le Roux, J. R. Hershey, and S. Watanabe, “Discriminative NMF and its application to single-channel source separation.” in INTERSPEECH, 2014, pp. 865–869.
- E. Vincent, N. Bertin, R. Gribonval, and F. Bimbot, “From blind to guided audio source separation: How models and side information can improve the separation of sound,” IEEE Signal Processing Magazine, vol. 31, no. 3, pp. 107–115, 2014.
- C. Blandin, A. Ozerov, and E. Vincent, “Multi-source TDOA estimation in reverberant audio using angular spectra and clustering,” Signal Processing, vol. 92, no. 8, pp. 1950–1960, 2012.
- J. O. Smith III, Spectral audio signal processing. W3K publishing, 2011.
- A. V. Oppenheim, Discrete-time signal processing. Pearson Education India, 1999.
- E. Allamanche, R. Geiger, J. Herre, and T. Sporer, “MPEG-4 low delay audio coding based on the AAC codec,” in Audio Engineering Society Convention 106. Audio Engineering Society, 1999.
- M. Schnell, M. Schmidt, M. Jander, T. Albert, R. Geiger, V. Ruoppila, P. Ekstrand, and G. Bernhard, “MPEG-4 enhanced low delay AAC-a new standard for high quality communication,” in Audio Engineering Society Convention 125. Audio Engineering Society, 2008.
- L. Su and H.-t. Wu, “Minimum-latency time-frequency analysis using asymmetric window functions,” arXiv preprint arXiv:1606.09047, 2016.
- K. T. Andersen and M. Moonen, “Adaptive time-frequency analysis for noise reduction in an audio filter bank with low delay,” IEEE/ACM Transactions on Audio, Speech and Language Processing, vol. 24, no. 4, pp. 784–795, 2016.
- E. Vincent, S. Watanabe, A. A. Nugraha, J. Barker, and R. Marxer, “An analysis of environment, microphone and data simulation mismatches in robust speech recognition,” Computer Speech & Language, 2016.
- V. Emiya, E. Vincent, N. Harlander, and V. Hohmann, “Subjective and objective quality assessment of audio source separation,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 19, no. 7, pp. 2046–2057, 2011.
- E. Vincent, R. Gribonval, and C. Févotte, “Performance measurement in blind audio source separation,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 14, no. 4, pp. 1462–1469, 2006.
- C. H. Taal, R. C. Hendriks, R. Heusdens, and J. Jensen, “An algorithm for intelligibility prediction of time–frequency weighted noisy speech,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 19, no. 7, pp. 2125–2136, 2011.
- J. Jensen and C. H. Taal, “An algorithm for predicting the intelligibility of speech masked by modulated noise maskers,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 24, no. 11, pp. 2009–2022, 2016.
- S. Van Kuyk, W. B. Kleijn, and R. C. Hendriks, “An evaluation of intrusive instrumental intelligibility metrics,” arXiv preprint arXiv:1708.06027, 2017.
- H.-T. T. Duong, Q.-C. Nguyen, C.-P. Nguyen, T.-H. Tran, and N. Q. Duong, “Speech enhancement based on nonnegative matrix factorization with mixed group sparsity constraint,” in Proceedings of the Sixth International Symposium on Information and Communication Technology. ACM, 2015, pp. 247–251.
- Z. Rafii and B. Pardo, “Online REPET-SIM for real-time speech enhancement,” in 2013 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 2013, pp. 848–852.
- S. Arberet, A. Ozerov, N. Q. Duong, E. Vincent, R. Gribonval, F. Bimbot, and P. Vandergheynst, “Nonnegative matrix factorization and spatial covariance model for under-determined reverberant audio source separation,” in Information Sciences Signal Processing and their Applications (ISSPA), 2010 10th International Conference on. IEEE, 2010, pp. 1–4.
- L. Le Magoarou, A. Ozerov, and N. Q. Duong, “Text-informed audio source separation using nonnegative matrix partial co-factorization,” in 2013 IEEE International Workshop on Machine Learning for Signal Processing (MLSP). IEEE, 2013, pp. 1–6.
- L. Wang, H. Ding, and F. Yin, “A region-growing permutation alignment approach in frequency-domain blind source separation of speech mixtures,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 19, no. 3, pp. 549–557, 2011.
- (2017, January). [Online]. Available: http://www.onn.nii.ac.jp/sisec13/evaluation_result/BGN/Kayser.txt
- N. Ono, Z. Rafii, D. Kitamura, N. Ito, and A. Liutkus, “The 2015 signal separation evaluation campaign,” in Latent Variable Analysis and Signal Separation. Springer, 2015, pp. 387–395.
- B. King and L. Atlas, “Single-channel source separation using simplified-training complex matrix factorization,” in Acoustics Speech and Signal Processing (ICASSP), 2010 IEEE International Conference on. IEEE, 2010, pp. 4206–4209.
- N. Ono, Z. Koldovsky, S. Miyabe, and N. Ito, “The 2013 signal separation evaluation campaign,” Proc. International Workshop on Machine Learning for Signal Processing, pp. 1–6, 2013.
- (2017, November). [Online]. Available: http://deeplearning.net/software/theano/
- (2017, November). [Online]. Available: https://riverbankcomputing.com/software/pyqt/intro
- (2017, November). [Online]. Available: http://www.pyqtgraph.org/
- F. Mayer and P. Mowlaee, “Improved phase reconstruction in single-channel speech separation,” in Sixteenth Annual Conference of the International Speech Communication Association, 2015.
- P. Mowlaee, J. Kulmer, J. Stahl, and F. Mayer, Single Channel Phase-Aware Signal Processing in Speech Communication: Theory and Practice. John Wiley & Sons, 2016.