Multi-scale Multi-band DenseNets for Audio Source Separation
This paper deals with the problem of audio source separation. To handle the complex and ill-posed nature of audio source separation, current state-of-the-art approaches employ deep neural networks to obtain instrumental spectra from a mixture. In this study, we propose a novel network architecture that extends the recently developed densely connected convolutional network (DenseNet), which has shown excellent results on image classification tasks. To deal with the specific problem of audio source separation, an up-sampling layer, block skip connections and band-dedicated dense blocks are incorporated on top of DenseNet. The proposed approach takes advantage of long contextual information and outperforms the state-of-the-art results from the SiSEC 2016 competition by a large margin in terms of signal-to-distortion ratio. Moreover, the proposed architecture requires significantly fewer parameters and considerably less training time compared with other methods.
|Naoya Takahashi, Yuki Mitsufuji|
|Sony Corporation, Minato-ku, Tokyo, Japan|
Index Terms— convolutional neural networks, DenseNet, source separation, multi-band
1 Introduction
Audio source separation has attracted considerable attention in the last decade. Various approaches have been introduced so far, such as local Gaussian modeling [1, 2], non-negative factorization [3, 4, 5], kernel additive modeling and combinations of those approaches [7, 8, 9]. Recently, deep neural network (DNN) based source separation methods have shown significant improvement in separation performance over earlier methods. In [10, 11], a standard feed-forward fully connected network (FNN) was used to obtain the source spectra. As input for the FNN, multiple frames (typically up to about 20) were concatenated to take advantage of temporal contexts. To model longer contexts, long short-term memory (LSTM) networks have been used. Despite its good performance, an LSTM usually requires a relatively long training time, making it difficult to re-train the network to adapt to different domains or to explore the best architecture.
Another well-known architecture, the convolutional neural network (CNN), has been very successful in the image domain and is also widely used in a variety of audio and video tasks [14, 15, 16, 17]. As convolution layers are stacked, the receptive field of deeper layers covers a larger area of the input field, enabling a deep CNN architecture to exploit long contexts, as an LSTM does. However, considerable depth is required to cover long contexts, which makes network training difficult and leads to performance degradation. Recent works, such as ResNets and Highway Networks, address this problem by bypassing signals from one layer to the next via identity connections; this enables networks with more than 100 layers to be trained successfully. Most recently, a novel CNN architecture called the densely connected convolutional network (DenseNet) has shown excellent performance on image recognition tasks. The idea of DenseNet is to use the concatenation of the output feature maps of all preceding layers as the input to succeeding layers. Unlike ResNet, this iterative connectivity enables the network to learn explicit cross-layer interactions and to reuse features computed in preceding layers, which yields efficient use of parameters. This property suits the audio source separation problem very well: the goal is to estimate instrumental spectrograms buried in interference sounds, and the estimated source spectrograms can be refined more easily by referring to the mixture or to previous layer outputs. However, DenseNet is inherently memory demanding because the number of inter-layer connections grows quadratically with depth. Even for image recognition tasks involving relatively low-resolution images (for instance 32×32 inputs and 10 or 100 output classes, as in CIFAR and SVHN), the authors used pooling layers to contain the explosion of the number of feature maps. In audio source separation, both the input and output dimensions are far larger (e.g. 1024 frequency bins × 128 frames) in order to utilize sufficiently long contexts with high frequency resolution.
To address this problem, we propose a fully convolutional multi-scale DenseNet equipped with dense blocks at multiple resolutions. The input of a lower-resolution dense block is created by iteratively down-sampling the outputs of the preceding dense blocks. The low-resolution dense blocks are then up-sampled to recover a higher resolution, and the results are fed to higher-resolution dense blocks together with the output of the preceding dense block of the same resolution, as shown in Fig. 2. The lower-resolution blocks capture the entire context, while the higher-resolution blocks recover details of the time-frequency structure in the spectrogram. This architecture enables the network to model both long contexts and fine-grained structures efficiently within a practical model size while maintaining the advantages of DenseNet. In order to increase the modeling capability, we further introduce dense blocks dedicated to particular frequency bands as well as to the entire frequency spectrum. Although convolution along the frequency axis has been shown to be effective in the audio domain, including speech and non-speech tasks, local patterns in a spectrogram often differ between frequency bands: the lower frequency band is more likely to contain high energies, tonalities and long sustained sounds, whereas the higher frequency band tends to have low energies, noise and rapidly decaying sounds. Most kernels in a convolution layer focus on the higher-energy band and neglect the lower-energy band, which they consequently fail to recover. Therefore, we propose dense blocks dedicated to each band. In combination with a global dense block, the network is thus able to model both local and global structures efficiently.
The contributions of this paper are as follows:
We propose multi-scale fully convolutional networks for audio source separation by extending DenseNet to cover long contexts while enabling the network to model large input and output dimensions.
We further propose to model each frequency band separately, enabling the kernels to focus on the particular distributions, which differ between frequency bands.
The proposed method largely outperforms the state of the art, which achieved the best score in the Signal Separation Evaluation Campaign (SiSEC) 2016 competition. Moreover, it considerably reduces the training time and the number of parameters in comparison with recently proposed DNN-based methods.
2 Multi-scale multi-band DenseNet
In this section, we first summarize the DenseNet architecture. Then, we extend DenseNet by introducing up-scaling blocks and inter-block skip connections to deal with the high-dimensional inputs and outputs that are inherent in utilizing long contexts with high-resolution audio. Next, we introduce a multi-band DenseNet architecture that improves modeling efficiency and capability. Finally, the complete architectures are outlined.
2.1 DenseNet
In a standard feed-forward network, the output of the $\ell$th layer is computed as
$$x_\ell = H_\ell(x_{\ell-1}),$$
where the network input is denoted as $x_0$ and $H_\ell(\cdot)$ is a non-linear transformation, which can be a composite function of operations such as batch normalization (BN), rectified linear units (ReLU), pooling, or convolution. In order to mitigate the difficulties of training very deep models, ResNet employs a skip connection which adds an identity mapping of the input to the non-linear transformation:
$$x_\ell = H_\ell(x_{\ell-1}) + x_{\ell-1}.$$
The skip connection allows the network to propagate the gradient directly to the preceding layers, making the training of deep architectures easier. DenseNet further improves the information flow between layers by replacing the simple addition of the output of a single preceding layer with a concatenation of all preceding layers:
$$x_\ell = H_\ell([x_0, x_1, \ldots, x_{\ell-1}]),$$
where $[\cdot]$ denotes the concatenation operation. Such dense connectivity enables all layers not only to receive the gradient directly but also to reuse features computed in preceding layers. This avoids the re-calculation of similar features in different layers, making the network highly parameter-efficient. Fig. 1 illustrates the dense block. In DenseNet, $H_\ell(\cdot)$ comprises BN, followed by ReLU and a convolution with $k$ feature maps. In the remainder of this paper, $k$ is referred to as the growth rate, since the number of input feature maps grows linearly with depth in proportion to $k$ (e.g. the input of the $\ell$th layer has $k_0 + k(\ell-1)$ feature maps, where $k_0$ is the number of channels of the block input).
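The dense connectivity and growth-rate arithmetic above can be sketched as follows; the random projection is a hypothetical stand-in for the BN-ReLU-convolution composite, used only so the concatenation pattern can be traced executably:

```python
import numpy as np

def dense_block(x, num_layers, growth_rate, rng):
    """Minimal sketch of a dense block (illustrative, not the paper's exact model).

    Each real layer applies BN-ReLU-conv producing `growth_rate` feature maps;
    here a random 1x1 projection stands in for the convolution.
    x: array of shape (channels, freq, time).
    """
    features = [x]
    for _ in range(num_layers):
        inp = np.concatenate(features, axis=0)  # concatenate ALL preceding outputs
        w = rng.standard_normal((growth_rate, inp.shape[0]))  # stand-in for conv
        out = np.maximum(w @ inp.reshape(inp.shape[0], -1), 0.0)  # ReLU
        features.append(out.reshape(growth_rate, *x.shape[1:]))
    return np.concatenate(features, axis=0)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8, 8))  # k0 = 4 input feature maps
y = dense_block(x, num_layers=3, growth_rate=12, rng=rng)
# input channels of layer l grow as k0 + k(l-1); final concat has k0 + L*k maps
assert y.shape == (4 + 3 * 12, 8, 8)
```

The layer-`l` input width `k0 + k(l-1)` is exactly why the number of connections, and hence memory, grows quadratically with depth.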
For image recognition tasks, a pooling layer, which aggregates local activations and maps them to a lower dimension, is essential to capture global information efficiently. A down-sampling layer, defined as a 1×1 convolution followed by a 2×2 average pooling layer, is introduced to facilitate this. By alternately connecting dense blocks and down-sampling layers, the feature map dimension is successively reduced and finally fed to a softmax classification layer after a global pooling layer. In the next section, we discuss how to apply these ideas to audio source separation.
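As a rough illustration of the down-sampling layer just described (1×1 convolution followed by 2×2 average pooling), assuming even spatial sizes and a plain weight matrix standing in for the learned 1×1 kernels:

```python
import numpy as np

def downsample(x, w):
    """Sketch of a down-sampling layer: 1x1 convolution then 2x2 average pooling.

    x: (c_in, h, w_dim) feature maps; w: (c_out, c_in) 1x1-conv weights.
    Assumes h and w_dim are even.
    """
    c_in, h, wd = x.shape
    y = np.tensordot(w, x, axes=([1], [0]))  # 1x1 conv = per-pixel channel matmul
    # 2x2 average pooling with stride 2
    y = y.reshape(y.shape[0], h // 2, 2, wd // 2, 2).mean(axis=(2, 4))
    return y

x = np.arange(2 * 4 * 4, dtype=float).reshape(2, 4, 4)
w = np.eye(2)  # identity 1x1 conv, so only the pooling changes values
y = downsample(x, w)
assert y.shape == (2, 2, 2)
assert y[0, 0, 0] == (0 + 1 + 4 + 5) / 4  # mean of the top-left 2x2 patch
```

Each such layer halves both feature-map dimensions, which is what lets the deeper blocks see a context twice as long per pooling step.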
2.2 Multi-Scale DenseNet with block skip connection and transposed convolution
Dense blocks and down-sampling layers comprise the down-sampling path of the proposed multi-scale DenseNet. Down-sampled feature maps enable the dense blocks to model longer contexts and wider frequency-range dependencies while alleviating computational expense. In order to recover the original resolution from lower-resolution feature maps, we introduce an up-sampling layer, defined as a transposed convolution whose filter size is the same as the pooling size. We again alternate up-sampling layers and dense blocks to successively recover higher-resolution feature maps. In order to allow forward and backward signal flow without passing through lower-resolution blocks, we also introduce inter-block skip connections, which directly connect two dense blocks of the same scale. With these connections, dense blocks in the down-sampling path can receive supervision and send the extracted features without compressing and decompressing them. The entire architecture is depicted in Fig. 2 for the case of 3 scales; this number can be tuned depending on data complexity and resource availability. Hereafter, we refer to this architecture as MDenseNet. Note that the proposed architecture is fully convolutional and thus can be applied to inputs of arbitrary length.
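The signal flow through the down- and up-sampling paths with inter-block skip connections can be traced shape-wise in a small sketch; here `dense` is an identity stand-in for a dense block, strided slicing stands in for pooling, and nearest-neighbour repetition stands in for the learned transposed convolution:

```python
import numpy as np

def upsample(x, scale=2):
    """Stand-in for the transposed convolution (the real layer is learned,
    with filter size equal to the pooling size)."""
    return x.repeat(scale, axis=1).repeat(scale, axis=2)

def mdense_path(x, dense, n_scales=3):
    """Shape-level sketch of the MDenseNet down/up path.

    The down-sampling path stores each block output as a skip; the up-sampling
    path concatenates each skip with the up-sampled features at the same scale.
    """
    skips = []
    for _ in range(n_scales - 1):
        x = dense(x)
        skips.append(x)        # inter-block skip connection, same scale
        x = x[:, ::2, ::2]     # stand-in for the down-sampling layer
    x = dense(x)               # lowest-resolution block sees the whole context
    for skip in reversed(skips):
        x = upsample(x)
        x = np.concatenate([x, skip], axis=0)  # skip joins the same-scale block
        x = dense(x)
    return x

dense = lambda x: x  # identity stand-in for a dense block
out = mdense_path(np.zeros((3, 16, 32)), dense)
assert out.shape[1:] == (16, 32)  # original time-frequency resolution recovered
```

The concatenation at each scale is what lets gradients reach the down-sampling path directly, without traversing the lower-resolution blocks.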
2.3 Multi-band MDenseNet
In the architecture discussed in Sec. 2.2, the kernels of the convolution layers are shared across the entire input field. This is reasonable if local input patterns can appear at any position in the input, as is the case for objects in natural photos. In audio, however, different patterns occur in different frequency bands, although a limited amount of translation of patterns exists due to relatively small pitch shifts. Therefore, limiting the band within which kernels are shared is more suitable for efficiently capturing local patterns. Indeed, limited kernel sharing has been shown to be effective in speech recognition. We split the input into multiple bands and apply a multi-scale DenseNet to each band. However, simply splitting the frequency band and modeling each band individually may hinder the ability to model the entire structure of the spectrogram. Hence, we build in parallel an MDenseNet for the full-band input and concatenate its output with the outputs from the multiple sub-band MDenseNets, as shown in Fig. 3. Note that in this architecture, since fine structure can be captured by the band-limited MDenseNets, the full-band MDenseNet can focus on modeling the rough global structure; thus, a simpler and less expensive model can be used. We refer to this architecture as MMDenseNet.
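A minimal sketch of this band-splitting scheme, with hypothetical identity callables standing in for the sub-band and full-band MDenseNets:

```python
import numpy as np

def mmdense(spec, band_net_lo, band_net_hi, full_net, split_bin):
    """Sketch of the multi-band scheme: dedicated (hypothetical) networks for
    the lower and upper bands plus one for the full band; their outputs are
    concatenated and would be fused by a final dense block in the real model."""
    lo = band_net_lo(spec[:split_bin])         # lower band: larger model in the paper
    hi = band_net_hi(spec[split_bin:])         # upper band
    banded = np.concatenate([lo, hi], axis=0)  # reassemble the full frequency axis
    full = full_net(spec)                      # coarse global structure
    return np.stack([banded, full], axis=0)    # channels for a final fusion block

ident = lambda x: x  # identity stand-ins for the three MDenseNets
spec = np.random.default_rng(1).random((1024, 128))
out = mmdense(spec, ident, ident, ident, split_bin=512)
assert out.shape == (2, 1024, 128)
```

Because each sub-network only sees its own band, its kernels need only model that band's energy and pattern statistics, which is the efficiency argument made above.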
2.4 Architecture details
Details of the proposed network architectures for audio source separation are described in Table 1. One advantage of MMDenseNet is that we can design a suitable architecture for each band individually and assign computational resources according to the importance of each band, which may differ depending on the target source or application. In this work, we split the frequency range into two bands at the middle and design a relatively larger model for the lower frequency band.
3 Experiments
3.1 Setup
We evaluated our proposed method on the DSD100 dataset, which was built for SiSEC. The dataset consists of Dev and Test sets with 50 songs each, recorded in stereo at a 44.1 kHz sampling frequency. The average song duration is about 4 minutes. For each song, the mixture and its four sources, bass, drums, other and vocals, are available. The task is to separate songs into the four source instruments, or simply into the vocals and accompaniment tracks. We used the spectrogram of the mixture (the sequence of short-time Fourier transform (STFT) magnitudes obtained with a frame size of 2048 samples and 50% overlap) as the input and trained a network to estimate the target spectrogram $S_i(f,t)$ by minimizing the squared error between the network output $\hat{S}_i(f,t)$ and $S_i(f,t)$, where $f$ is the frequency bin index, $t$ is the frame index and $i$ is the instrument index. Training was conducted with RMSprop, with an initial learning rate of 0.001, reduced to 0.0001 after the performance saturated. Networks were trained individually for each instrument using data augmentation, and the estimates were further enhanced by applying a multi-channel Wiener filter (MWF), following prior work.
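The input representation described above can be sketched as follows; the Hann window is an assumption, since the paper does not state the analysis window:

```python
import numpy as np

def magnitude_spectrogram(x, frame=2048, hop=1024):
    """Magnitude STFT as described: 2048-sample frames with 50% overlap (hop 1024).

    A Hann window is assumed here. Returns (freq_bins, time_frames).
    """
    win = np.hanning(frame)
    n_frames = 1 + (len(x) - frame) // hop
    frames = np.stack([x[i * hop : i * hop + frame] * win for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1)).T

x = np.random.default_rng(0).standard_normal(44100)  # 1 s of audio at 44.1 kHz
S = magnitude_spectrogram(x)
assert S.shape[0] == 2048 // 2 + 1  # 1025 frequency bins per frame
```

The training objective is then the squared error summed over bins and frames, $\sum_{f,t}\bigl(\hat{S}_i(f,t) - S_i(f,t)\bigr)^2$, per instrument $i$.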
3.2 State of the art comparison
Table 2: SDR in dB.
We compared our method with other state-of-the-art approaches:
DeepNMF : Non-negative deep network architecture which results from unfolding NMF iterations and untying their parameters.
BLSTM: A three-layer bidirectional long short-term memory (BLSTM) network was used to estimate the source spectrogram. This system achieved the second-best score in the SiSEC 2016 competition and can be considered a strong baseline, since it also uses MWF; the performance difference between these systems therefore highlights the effect of our proposed network architectures.
BLEND: This approach linearly blends the estimates of the FNN and BLSTM before applying MWF. The best score in the SiSEC 2016 competition was obtained with this approach.
Table 2 and Fig. 4 show the signal-to-distortion ratio (SDR) computed using the BSS Eval toolbox. Among the state-of-the-art baselines, BLEND, the fusion of BLSTM and FNN, showed the best performance. MDenseNet performed as well as BLSTM, which also utilizes MWF. This suggests that the multi-scale architecture successfully learned to utilize long-term contexts using the stack of convolution layers instead of a recurrent architecture; this claim is further investigated in Sec. 3.3. MMDenseNet significantly improved performance and largely outperformed all baselines, showing the effectiveness of the multi-band architecture. We also trained MMDenseNet with the additional dataset MedleyDB, as in the FNN approach, and denote the result as MMDenseNet+. It further improved performance for all instruments except drums and showed the best overall result. Notably, we obtained a 0.97 dB average improvement over the best results of SiSEC 2016.
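For orientation, a simplified SDR can be computed as below; note that the actual BSS Eval metric additionally projects the estimate onto target, interference and artifact subspaces, which this sketch omits:

```python
import numpy as np

def sdr_simple(reference, estimate):
    """Simplified signal-to-distortion ratio in dB: 10*log10(||s||^2 / ||s - s_hat||^2).

    BSS Eval further decomposes the error into interference and artifact terms
    via least-squares projections; that refinement is omitted here.
    """
    err = reference - estimate
    return 10.0 * np.log10(np.sum(reference ** 2) / np.sum(err ** 2))

ref = np.ones(1000)
est = ref + 0.01 * np.ones(1000)  # small constant estimation error
# ||ref||^2 = 1000, ||err||^2 = 0.1  ->  10*log10(10000) = 40 dB
assert abs(sdr_simple(ref, est) - 40.0) < 1e-6
```

On this scale, the reported 0.97 dB average gain corresponds to roughly a 25% reduction in distortion energy.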
3.3 Architecture validation
The proposed multi-scale dense blocks enable the network to model the signal at different scales, i.e. the global context in the down-scaled blocks and the local fine-grained structure in the high-resolution blocks. To validate whether the dense blocks at each scale actually contribute to recovering the target spectrogram, we computed the map-wise $\ell_2$-norm of the filter weights of the dense blocks in the up-sampling path (dense 5, 6 and 7 in Table 1). The input of a dense block in the up-sampling path is the concatenation of the output of the preceding up-sampling layer, coming from the down-scaled block, and the skip connection from the dense block in the down-sampling path, as in Fig. 2. By comparing the averaged $\ell_2$-norms of the filter weights corresponding to the up-sampling path and to the skip connection path, we can estimate the contribution of the dense blocks at different scales. Fig. 5 shows that the $\ell_2$-norms of these two paths are roughly the same, indicating that every dense block, at every scale, indeed contributes. This validates the advantage of the multi-scale DenseNet structure.
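This weight-norm analysis can be sketched as follows, under the hypothetical assumption that the first `n_skip` input maps of a first-layer kernel of shape (out, in, kh, kw) come from the skip connection and the rest from the up-sampled path:

```python
import numpy as np

def pathwise_weight_norms(w, n_skip):
    """Sketch of the Sec. 3.3 analysis: average the per-input-map l2 norms of a
    convolution kernel separately over the skip-connection maps (assumed to be
    the first n_skip input channels) and the up-sampled-path maps (the rest)."""
    per_map = np.sqrt((w ** 2).sum(axis=(0, 2, 3)))  # l2 norm per input feature map
    return per_map[:n_skip].mean(), per_map[n_skip:].mean()

rng = np.random.default_rng(0)
w = rng.standard_normal((32, 24, 3, 3))  # hypothetical first-layer kernel
skip_norm, up_norm = pathwise_weight_norms(w, n_skip=12)
assert skip_norm > 0 and up_norm > 0
```

Comparable averages for the two groups, as reported for Fig. 5, indicate that the block draws on both paths rather than ignoring one of them.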
3.4 Model efficiency
The proposed architecture encourages feature reuse within and between dense blocks, leading to a compact and efficient model. To verify this, the numbers of parameters and the model training times are compared in Table 3. The proposed architectures have significantly fewer parameters than the baseline methods. MDenseNet achieved performance comparable to the state-of-the-art BLEND approach with only 1.5% of the parameters, and MMDenseNet largely outperformed BLEND with only 3.6% of the parameters. This demonstrates the compactness and efficiency of the model, which is preferable for deployment. The training time is also significantly shorter than for the BLSTM and BLEND methods, making it easier to tune the hyper-parameters.
4 Conclusion
In this paper, we extended DenseNet to tackle the audio source separation problem. The proposed architectures have dense blocks at multiple scales connected through down-sampling and up-sampling layers, which enable the network to efficiently model both fine-grained local structure and global structure. Furthermore, we proposed a multi-band DenseNet that enables the kernels in the convolution layers to learn more effectively; this showed considerable performance improvement. Experimental results on the SiSEC 2016 DSD100 dataset show that our approach outperforms the state of the art by a large margin, while significantly reducing the model size and training time.
References
-  N. Q. K. Duong, E. Vincent, and R. Gribonval, “Under-determined reverberant audio source separation using a full-rank spatial covariance model,” IEEE Trans. Audio, Speech & Language Processing, vol. 18, no. 7, pp. 1830–1840, 2010.
-  D. Fitzgerald, A. Liutkus, and R. Badeau, “PROJET - spatial audio separation using projections,” in 2016 IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2016, Shanghai, China, March 20-25, 2016, 2016, pp. 36–40.
-  A. Liutkus, D. Fitzgerald, and R. Badeau, “Cauchy nonnegative matrix factorization,” in IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, WASPAA New Paltz, NY, USA, 2015, pp. 1–5.
-  J. LeRoux, J. R. Hershey, and F. Weninger, “Deep NMF for speech separation,” in Proc. ICASSP, 2015, pp. 66–70.
-  Y. Mitsufuji, S. Koyama, and H. Saruwatari, “Multichannel blind source separation based on non-negative tensor factorization in wavenumber domain,” in 2016 IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2016, Shanghai, China, March 20-25, 2016, 2016, pp. 56–60.
-  A. Liutkus, D. Fitzgerald, Z. Rafii, B. Pardo, and L. Daudet, “Kernel additive models for source separation,” IEEE Trans. Signal Processing, vol. 62, no. 16, pp. 4298–4310, 2014.
-  A. Ozerov and C. Févotte, “Multichannel nonnegative matrix factorization in convolutive mixtures for audio source separation,” IEEE Trans. Audio, Speech & Language Processing, vol. 18, no. 3, pp. 550–563, 2010.
-  A. Liutkus, D. Fitzgerald, and Z. Rafii, “Scalable audio separation with light kernel additive modelling,” in 2015 IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2015, South Brisbane, Queensland, Australia, April 19-24, 2015, 2015, pp. 76–80.
-  D. Fitzgerald, A. Liutkus, and R. Badeau, “Projection-based demixing of spatial audio,” IEEE/ACM Trans. Audio, Speech & Language Processing, vol. 24, no. 9, pp. 1560–1572, 2016.
-  A. A. Nugraha, A. Liutkus, and E. Vincent, “Multichannel music separation with deep neural networks,” in Proc. EUSIPCO, 2015.
-  S. Uhlich, F. Giron, and Y. Mitsufuji, “Deep neural network based instrument extraction from music,” in Proc. ICASSP, 2015, pp. 2135–2139.
-  S. Uhlich, M. Porcu, F. Giron, M. Enenkl, T. Kemp, N. Takahashi, and Y. Mitsufuji, “Improving Music Source Separation Based On Deep Networks Through Data Augmentation And Network Blending,” in Proc. ICASSP, 2017, pp. 261–265.
-  Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient based learning applied to document recognition,” in Proc. of the IEEE, vol. 86, no. 11, 1998, pp. 2278–2324.
-  T. Sercu, C. Puhrsch, B. Kingsbury, and Y. LeCun, “Very deep multilingual convolutional neural networks for LVCSR,” in Proc. ICASSP, 2016, pp. 4955–4959.
-  N. Takahashi, M. Gygli, B. Pfister, and L. Van Gool, “Deep Convolutional Neural Networks and Data Augmentation for Acoustic Event Detection,” in Proc. Interspeech, 2016.
-  F. Korzeniowski and G. Widmer, “A fully convolutional deep auditory model for musical chord recognition,” in Proc. International Workshop on Machine Learning for Signal Processing (MLSP), 2016.
-  N. Takahashi, M. Gygli, and L. Van Gool, “Aenet: Learning deep audio features for video analysis,” arXiv preprint arXiv:1701.00599, 2017.
-  K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proc. CVPR, 2016.
-  R. K. Srivastava, K. Greff, and J. Schmidhuber, “Training very deep networks,” in NIPS, 2015.
-  G. Huang, Z. Liu, and K. Q. Weinberger, “Densely connected convolutional networks,” arXiv preprint arXiv:1608.06993, 2016.
-  A. Krizhevsky and G. Hinton, “Learning multiple layers of features from tiny images,” Tech Report, 2009.
-  Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu, and A. Y. Ng, “Reading digits in natural images with unsupervised feature learning,” in NIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2011.
-  A. Liutkus, F.-R. Stöter, Z. Rafii, D. Kitamura, B. Rivet, N. Ito, N. Ono, and J. Fontecave, “The 2016 Signal Separation Evaluation Campaign,” in Proc. LVA/ICA, 2017, pp. 66–70.
-  S. Ioffe and C. Szegedy, “ Batch normalization: Accelerating deep network training by reducing internal covariate shift,” in Proc. ICML, 2015.
-  X. Glorot, A. Bordes, and Y. Bengio, “Deep sparse rectifier neural networks,” in Proc. AISTATS, 2011.
-  O. Abdel-Hamid, L. Deng, and D. Yu, “Exploring Convolutional Neural Network Structures and Optimization Techniques for Speech Recognition,” in Proc. Interspeech, 2013.
-  T. Tieleman and G. Hinton, “Lecture 6.5 - RMSProp,” Coursera: Neural Networks for Machine Learning, 2012.
-  R. M. Bittner, J. Salamon, M. Tierney, M. Mauch, C. Cannam, and J. P. Bello, “MedleyDB: A multitrack dataset for annotation-intensive MIR research,” in Proc. ISMIR, 2014, pp. 66–70.
-  E. Vincent, R. Gribonval, and C. Févotte, “Performance measurement in blind audio source separation,” IEEE Trans. on Audio, Speech and Language Processing, vol. 14, no. 4, pp. 1462–1469, 2006.