Full-Capacity Unitary Recurrent Neural Networks


Scott Wisdom, Thomas Powers, John R. Hershey, Jonathan Le Roux, and Les Atlas
Department of Electrical Engineering, University of Washington
{swisdom, tcpowers, atlas}@uw.edu
Mitsubishi Electric Research Laboratories (MERL)
{hershey, leroux}@merl.com
Equal contribution
Abstract

Recurrent neural networks are powerful models for processing sequential data, but they are generally plagued by vanishing and exploding gradient problems. Unitary recurrent neural networks (uRNNs), which use unitary recurrence matrices, have recently been proposed as a means to avoid these issues. However, in previous experiments, the recurrence matrices were restricted to be a product of parameterized unitary matrices, and an open question remains: when does such a parameterization fail to represent all unitary matrices, and how does this restricted representational capacity limit what can be learned? To address this question, we propose full-capacity uRNNs that optimize their recurrence matrix over all unitary matrices, leading to significantly improved performance over uRNNs that use a restricted-capacity recurrence matrix. Our contribution consists of two main components. First, we provide a theoretical argument to determine if a unitary parameterization has restricted capacity. Using this argument, we show that a recently proposed unitary parameterization has restricted capacity for hidden state dimension greater than 7. Second, we show how a complete, full-capacity unitary recurrence matrix can be optimized over the differentiable manifold of unitary matrices. The resulting multiplicative gradient step is very simple and does not require gradient clipping or learning rate adaptation. We confirm the utility of our claims by empirically evaluating our new full-capacity uRNNs on both synthetic and natural data, achieving superior performance compared to both LSTMs and the original restricted-capacity uRNNs.

 


1 Introduction

Deep feed-forward and recurrent neural networks have been shown to be remarkably effective in a wide variety of problems. A primary difficulty in training using gradient-based methods has been the so-called vanishing or exploding gradient problem, in which the instability of the gradients over multiple layers can impede learning [1, 2]. This problem is particularly acute for recurrent networks, since the repeated use of the recurrent weight matrix can magnify any instability.

This problem has been addressed in the past by various means, including gradient clipping [3], using orthogonal matrices for initialization of the recurrence matrix [4, 5], or by using pioneering architectures such as long short-term memory (LSTM) recurrent networks [6] or gated recurrent units [7]. Recently, several innovative architectures have been introduced to improve information flow in a network: residual networks, which directly pass information from previous layers up in a feed-forward network [8], and attention networks, which allow a recurrent network to access past activations [9]. The idea of using a unitary recurrent weight matrix was introduced so that the gradients are inherently stable and do not vanish or explode [10]. The resulting unitary recurrent neural network (uRNN) is complex-valued and uses a complex form of the rectified linear activation function. However, this idea was investigated using, as we show, a potentially restricted form of unitary matrices.

The two main components of our contribution can be summarized as follows:

1) We provide a theoretical argument to determine the smallest dimension for which any parameterization of the unitary recurrence matrix does not cover the entire set of all unitary matrices. The argument relies on counting real-valued parameters and using Sard's theorem to show that the smooth map from these parameters to the unitary manifold is not onto. In particular, we show that a previously proposed parameterization [10] cannot represent all $N \times N$ unitary matrices for $N > 7$. Thus, such a parameterization results in what we refer to as a restricted-capacity unitary recurrence matrix.

2) To overcome the limitations of restricted-capacity parameterizations, we propose a new method for stochastic gradient descent for training the unitary recurrence matrix, which constrains the gradient to lie on the differentiable manifold of unitary matrices. This approach allows us to directly optimize a complete, or full-capacity, unitary matrix. Neither restricted-capacity nor full-capacity unitary matrix optimization requires gradient clipping. Furthermore, full-capacity optimization still achieves good results without adaptation of the learning rate during training.

To test the limitations of a restricted-capacity representation and to confirm that our full-capacity uRNN does have practical implications, we test restricted-capacity and full-capacity uRNNs on both synthetic and natural data tasks. These tasks include synthetic system identification, long-term memorization, frame-to-frame prediction of speech spectra, and pixel-by-pixel classification of handwritten digits. Our proposed full-capacity uRNNs generally achieve equivalent or superior performance on synthetic and natural data compared to both LSTMs [6] and the original restricted-capacity uRNNs [10].

In the next section, we give an overview of unitary recurrent neural networks. Section 3 presents our first contribution: the theoretical argument to determine whether any unitary parameterization has restricted capacity. Section 4 describes our second contribution, where we show how to optimize a full-capacity unitary matrix. We confirm our results with simulated and natural data in Section 5 and present our conclusions in Section 6.

2 Unitary recurrent neural networks

The uRNN proposed by Arjovsky et al. [10] consists of the following nonlinear dynamical system, which has real- or complex-valued inputs $x_t$ of dimension $M$, complex-valued hidden states $h_t$ of dimension $N$, and real- or complex-valued outputs $y_t$ of dimension $L$:

\[ h_t = \sigma_b\!\left( W h_{t-1} + V x_t \right), \qquad y_t = U h_t + c, \qquad (1) \]

where, if the outputs are real-valued, the hidden-to-output transformation is real-valued and acts on the real and imaginary parts of $h_t$. The element-wise nonlinearity $\sigma_b$ is

\[ \left[\sigma_b(z)\right]_i = \begin{cases} \dfrac{|z_i| + b_i}{|z_i|}\, z_i, & \text{if } |z_i| + b_i > 0, \\[4pt] 0, & \text{otherwise}. \end{cases} \qquad (2) \]

Note that this nonlinearity consists of a soft-thresholding of the magnitude of each element $z_i$ using the bias vector $b$. Hard-thresholding would instead set the output of $\sigma_b$ to $z_i$ whenever $|z_i| + b_i > 0$. The parameters of the uRNN are as follows: $W \in \mathbb{C}^{N \times N}$, unitary hidden-state transition matrix; $V \in \mathbb{C}^{N \times M}$, input-to-hidden transformation; $b \in \mathbb{R}^{N}$, nonlinearity bias; $U \in \mathbb{C}^{L \times N}$, hidden-to-output transformation; and $c \in \mathbb{C}^{L}$, output bias.

Arjovsky et al. [10] propose the following parameterization of the unitary matrix $W$:

\[ W = D_3 R_2 F^{-1} D_2 \Pi R_1 F D_1, \qquad (3) \]

where $D_1, D_2, D_3$ are diagonal unitary matrices, $R_1, R_2$ are Householder reflection matrices [11], $F$ is a discrete Fourier transform (DFT) matrix, and $\Pi$ is a permutation matrix. The resulting matrix $W$ is unitary because all of its component matrices are unitary. This decomposition is efficient because diagonal, reflection, and permutation matrices are $\mathcal{O}(N)$ to compute, and DFTs can be computed in $\mathcal{O}(N \log N)$ time using the fast Fourier transform (FFT). The parameter vector consists of $7N$ real-valued parameters: $N$ parameters for each of the three diagonal matrices, whose entries are of the form $D_{k,k} = e^{i\theta_k}$, and $2N$ parameters for each of the two Householder reflection matrices $R_i = I - 2 u_i u_i^{\mathrm{H}} / \|u_i\|^2$, which are the real and imaginary parts of the complex reflection vectors $u_i$.
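To make the construction concrete, the following is a minimal NumPy sketch (not the authors' Theano implementation) that assembles a matrix of the form (3) from its $7N$ real parameters and checks unitarity; the helper names and random parameter choices are illustrative.

```python
import numpy as np

def householder(u):
    """Complex Householder reflection R = I - 2 u u^H / ||u||^2 (unitary)."""
    u = u.reshape(-1, 1)
    return np.eye(len(u), dtype=complex) - 2.0 * (u @ u.conj().T) / np.vdot(u, u).real

def restricted_unitary(theta1, theta2, theta3, u1, u2, perm):
    """Assemble W = D3 R2 F^{-1} D2 Pi R1 F D1 from 7N real parameters:
    N angles per diagonal matrix (3N total) and 2N reals per reflection vector (4N total)."""
    N = len(theta1)
    D = lambda th: np.diag(np.exp(1j * th))          # diagonal unitary matrices
    F = np.fft.fft(np.eye(N)) / np.sqrt(N)           # unitary DFT matrix
    Finv = F.conj().T                                # inverse DFT (also unitary)
    Pi = np.eye(N)[perm]                             # permutation matrix
    return D(theta3) @ householder(u2) @ Finv @ D(theta2) @ Pi @ householder(u1) @ F @ D(theta1)

# Example: a random W for N = 8; product of unitary factors should satisfy W^H W = I.
N = 8
rng = np.random.default_rng(0)
W = restricted_unitary(*(rng.uniform(-np.pi, np.pi, N) for _ in range(3)),
                       rng.standard_normal(N) + 1j * rng.standard_normal(N),
                       rng.standard_normal(N) + 1j * rng.standard_normal(N),
                       rng.permutation(N))
assert np.allclose(W.conj().T @ W, np.eye(N))
```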

3 Estimating the representation capacity of structured unitary matrices

In this section, we state and prove a theorem that can be used to determine when a particular unitary parameterization does not have the capacity to represent all unitary matrices. As an application of this theorem, we show that the parameterization (3) does not have the capacity to cover all $N \times N$ unitary matrices for $N > 7$. First, we establish an upper bound on the number of real-valued parameters required to represent any unitary matrix. Then, we state and prove our theorem.

Lemma 3.1

The set of all $N \times N$ unitary matrices is a manifold of dimension $N^2$.

Proof: The set of all $N \times N$ unitary matrices is the well-known unitary Lie group $U(N)$ [12, §3.4]. A Lie group identifies group elements with points on a differentiable manifold [12, §2.2]. The dimension of the manifold is equal to the dimension of the Lie algebra $\mathfrak{u}(N)$, which is a vector space that is the tangent space at the identity element [12, §4.5]. For $U(N)$, the Lie algebra consists of all $N \times N$ skew-Hermitian matrices [12, §5.4]. A skew-Hermitian matrix $L$ is any matrix such that $L = -L^{\mathrm{H}}$, where $(\cdot)^{\mathrm{H}}$ denotes the conjugate transpose. To determine the dimension of $U(N)$, we can determine the dimension of $\mathfrak{u}(N)$. Because of the skew-Hermitian constraint, the diagonal elements of $L$ are purely imaginary, which corresponds to $N$ real-valued parameters. Also, since $L_{ij} = -\overline{L_{ji}}$, the upper and lower triangular parts of $L$ are parameterized by $N(N-1)/2$ complex numbers, which corresponds to an additional $N(N-1)$ real parameters. Thus, $U(N)$ is a manifold of dimension $N + N(N-1) = N^2$. ∎
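As an illustrative numerical companion to this counting argument (not part of the proof), the following sketch maps exactly $N^2$ real parameters to a skew-Hermitian matrix and exponentiates it to obtain a unitary matrix; the function name and parameter ordering are arbitrary choices.

```python
import numpy as np
from scipy.linalg import expm

def skew_hermitian_from_params(p, N):
    """Map N^2 real parameters to a skew-Hermitian matrix L = -L^H:
    N params for the purely imaginary diagonal, N(N-1) for the strict upper triangle."""
    assert len(p) == N * N
    L = np.zeros((N, N), dtype=complex)
    L[np.diag_indices(N)] = 1j * p[:N]                      # purely imaginary diagonal
    iu = np.triu_indices(N, k=1)
    off = p[N::2] + 1j * p[N + 1::2]                        # N(N-1)/2 complex entries
    L[iu] = off
    L[(iu[1], iu[0])] = -off.conj()                         # enforce L_ij = -conj(L_ji)
    return L

N = 5
p = np.random.default_rng(0).standard_normal(N * N)
L = skew_hermitian_from_params(p, N)
W = expm(L)                                                 # exp of skew-Hermitian is unitary
assert np.allclose(L, -L.conj().T)
assert np.allclose(W.conj().T @ W, np.eye(N))
```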

Theorem 3.2

If a family of $N \times N$ unitary matrices is parameterized by $P$ real-valued parameters with $P < N^2$, then it cannot contain all $N \times N$ unitary matrices.

Proof: We consider a family of unitary matrices that is parameterized by $P$ real-valued parameters through a smooth map $g$ from the space of parameters $\mathbb{R}^P$ to the space of all unitary matrices $U(N)$. The space of parameters is considered as a $P$-dimensional manifold, while the space of all unitary matrices is an $N^2$-dimensional manifold according to Lemma 3.1. Then, if $P < N^2$, Sard's theorem [13] implies that the image of $g$ is of measure zero in $U(N)$, and in particular $g$ is not onto. Since $g$ is not onto, there must exist a unitary matrix $W \in U(N)$ for which there is no corresponding parameter vector $p$ such that $g(p) = W$. Thus, if $P < N^2$, the parameterized family cannot represent all unitary matrices in $U(N)$. ∎

We now apply Theorem 3.2 to the parameterization (3). Note that the parameterization (3) has $P = 7N$ real-valued parameters. If we solve for $N$ in $7N < N^2$, we get $N > 7$. Thus, the parameterization (3) cannot represent all unitary matrices for dimension $N > 7$.

4 Optimizing full-capacity unitary matrices on the Stiefel manifold

In this section, we show how to get around the limitations of restricted-capacity parameterizations and directly optimize a full-capacity unitary matrix. We consider the Stiefel manifold of all $N \times N$ complex-valued matrices whose columns are $N$ orthonormal vectors in $\mathbb{C}^N$ [14]. Mathematically, the Stiefel manifold $V_N(\mathbb{C}^N)$ is defined as

\[ V_N(\mathbb{C}^N) = \left\{ W \in \mathbb{C}^{N \times N} : W^{\mathrm{H}} W = I_N \right\}. \qquad (4) \]

For any $W \in V_N(\mathbb{C}^N)$, any matrix $Z$ in the tangent space $T_W V_N(\mathbb{C}^N)$ of the Stiefel manifold satisfies $Z^{\mathrm{H}} W + W^{\mathrm{H}} Z = 0$ [14]. The Stiefel manifold becomes a Riemannian manifold when its tangent space is equipped with an inner product. Tagare [14] suggests using the canonical inner product, given by

\[ \langle Z_1, Z_2 \rangle_c = \operatorname{tr}\!\left( Z_1^{\mathrm{H}} \left( I - \tfrac{1}{2} W W^{\mathrm{H}} \right) Z_2 \right). \qquad (5) \]

Under this canonical inner product on the tangent space, the gradient in the Stiefel manifold of the loss function $f$ with respect to the matrix $W$ is $A W$, where $A = G W^{\mathrm{H}} - W G^{\mathrm{H}}$ is a skew-Hermitian matrix and $G$, with $G_{ij} = \partial f / \partial W_{ij}$, is the usual gradient of the loss function with respect to the matrix $W$ [14]. Using these facts, Tagare [14] suggests a descent curve along the Stiefel manifold at training iteration $k$ given by the matrix product of the Cayley transformation of $A^{(k)}$ with the current solution $W^{(k)}$:

\[ W^{(k+1)} = \left( I + \tfrac{\lambda}{2} A^{(k)} \right)^{-1} \left( I - \tfrac{\lambda}{2} A^{(k)} \right) W^{(k)}, \qquad (6) \]

where $\lambda$ is a learning rate and $A^{(k)} = G^{(k)} W^{(k)\mathrm{H}} - W^{(k)} G^{(k)\mathrm{H}}$. Gradient descent proceeds by repeatedly performing the update (6). Tagare [14] suggests an Armijo-Wolfe search along the curve to adapt $\lambda$, but such a procedure would be expensive for neural network optimization since it requires multiple evaluations of the forward model and gradients. We found that simply using a fixed learning rate often works well. Also, RMSprop-style scaling of the gradient by a running average of the previous gradients' norms [15] before applying the multiplicative step (6) can improve convergence. The only additional substantial computation required beyond the forward and backward passes of the network is the matrix inverse in (6).
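The following is a minimal NumPy sketch of the multiplicative step (6), assuming the Euclidean gradient G has already been computed by backpropagation; the toy loss, hidden size, and learning rate are illustrative and not taken from the paper's experiments.

```python
import numpy as np

def stiefel_update(W, G, lr):
    """One multiplicative gradient step on the Stiefel manifold (Cayley transform):
    W_new = (I + lr/2 * A)^{-1} (I - lr/2 * A) W, with A = G W^H - W G^H skew-Hermitian."""
    N = W.shape[0]
    A = G @ W.conj().T - W @ G.conj().T            # skew-Hermitian: A = -A^H
    I = np.eye(N, dtype=W.dtype)
    return np.linalg.solve(I + (lr / 2.0) * A, (I - (lr / 2.0) * A) @ W)

# Toy usage: descend on a simple loss f(W) = ||W - W_target||_F^2 toward a target
# unitary matrix while staying exactly on the manifold of unitary matrices.
rng = np.random.default_rng(0)
N = 8
W_target, _ = np.linalg.qr(rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N)))
W, _ = np.linalg.qr(rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N)))
for _ in range(200):
    G = 2.0 * (W - W_target)                       # Euclidean gradient of the toy loss
    W = stiefel_update(W, G, lr=0.05)
assert np.allclose(W.conj().T @ W, np.eye(N), atol=1e-6)   # W stays unitary
```

Because the Cayley transformation of a skew-Hermitian matrix is itself unitary, the update (6) keeps the iterate exactly on the manifold regardless of the gradient, which is why no clipping or re-projection is needed.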

5 Experiments

All models are implemented in Theano [16], based on the implementation of restricted-capacity uRNNs by [10], available from https://github.com/amarshah/complex_RNN. All code to replicate our results is available from https://github.com/stwisdom/urnn. All models use RMSprop [15] for optimization, except that full-capacity uRNNs optimize their recurrence matrices with a fixed learning rate using the update step (6) and optional RMSprop-style gradient normalization.
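As a sketch of how this hybrid scheme can be organized (a hypothetical class with illustrative hyperparameter values, not the released Theano code), the recurrence matrix can be updated with a fixed learning rate and optional RMSprop-style gradient-norm scaling:

```python
import numpy as np

class ScaledStiefelSGD:
    """Fixed-learning-rate descent along the Stiefel manifold with optional RMSprop-style
    scaling of the gradient by a running average of previous gradients' norms."""
    def __init__(self, lr=1e-3, decay=0.9, eps=1e-8, scale=True):
        self.lr, self.decay, self.eps, self.scale, self.avg = lr, decay, eps, scale, 0.0

    def step(self, W, G):
        if self.scale:                         # optional gradient-norm scaling
            self.avg = self.decay * self.avg + (1 - self.decay) * np.linalg.norm(G) ** 2
            G = G / (np.sqrt(self.avg) + self.eps)
        A = G @ W.conj().T - W @ G.conj().T    # skew-Hermitian factor of the Riemannian gradient
        I = np.eye(W.shape[0], dtype=W.dtype)
        return np.linalg.solve(I + 0.5 * self.lr * A, (I - 0.5 * self.lr * A) @ W)

# usage (per minibatch): opt = ScaledStiefelSGD(); W = opt.step(W, G)
```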

5.1 Synthetic data

First, we compare the performance of full-capacity uRNNs to restricted-capacity uRNNs and LSTMs on two tasks with synthetic data. The first task is synthetic system identification, where a uRNN must learn the dynamics of a target uRNN given only samples of the target uRNN’s inputs and outputs. The second task is the copy memory problem, in which the network must recall a sequence of data after a long period of time.

5.1.1 System identification

For the task of system identification, we consider the problem of learning the dynamics of a nonlinear dynamical system of the form (1), given a dataset of inputs and outputs of the system. We draw a true system randomly either from the set of restricted-capacity unitary matrices given by the parameterization in (3), or from a wider set of unitary matrices that is not restricted to that parameterization. We sample from the wider set by taking the matrix product of two unitary matrices drawn from the restricted-capacity set.

We use a fixed sequence length $T$, and we set the input dimension $M$ and output dimension $L$ both equal to the hidden state dimension $N$. The input-to-hidden transformation $V$ and hidden-to-output transformation $U$ are both set to the identity, the output bias $c$ is set to $0$, the initial state is set to $0$, and the hidden bias $b$ is drawn from a uniform distribution with a small negative mean to ensure stability of the system outputs. Inputs are generated by sampling $T$-length i.i.d. sequences of zero-mean, unit-covariance circular complex-valued Gaussian vectors of dimension $M$. The outputs are created by running the system (1) forward on the inputs.
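The following sketch illustrates, under the setup above, how the input/output pair for one sequence could be generated; the nonlinearity follows (2), while the hidden size, sequence length, and bias range shown are illustrative rather than the paper's exact values.

```python
import numpy as np

def sigma_b(z, b):
    """Nonlinearity of (2): soft-threshold the magnitude of each entry by bias b."""
    mag = np.abs(z)
    scale = np.maximum(mag + b, 0.0) / np.maximum(mag, 1e-12)
    return scale * z

def run_system(W, b, X):
    """Run the uRNN dynamics (1) with V = U = I and c = 0 on inputs X (T x N)."""
    T, N = X.shape
    h = np.zeros(N, dtype=complex)
    Y = np.zeros_like(X)
    for t in range(T):
        h = sigma_b(W @ h + X[t], b)
        Y[t] = h                                   # output y_t = U h_t + c = h_t
    return Y

# Illustrative data generation for one sequence: circular complex Gaussian inputs.
rng = np.random.default_rng(0)
N, T = 8, 150                                      # hidden size and sequence length (illustrative)
W, _ = np.linalg.qr(rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N)))
b = rng.uniform(-0.11, -0.09, N)                   # small negative hidden bias (illustrative range)
X = (rng.standard_normal((T, N)) + 1j * rng.standard_normal((T, N))) / np.sqrt(2)
Y = run_system(W, b, X)
```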

We compare a restricted-capacity uRNN using the parameterization from (3) and a full-capacity uRNN using Stiefel manifold optimization with no gradient normalization, as described in Section 4. We choose hidden state dimensions $N$ to test the critical points predicted by our arguments in Section 3 for the parameterization (3); these dimensions are chosen to lie below, at, and above the critical dimension of $N = 7$.

For all experiments, we use the same numbers of training, validation, and test sequences. Mean-squared error (MSE) is used as the loss function, with the same learning rate and batch size throughout. Both models use the same matrix drawn from the restricted-capacity set as initialization. To isolate the effect of unitary recurrence matrix capacity, we only optimize $W$, setting all other parameters to their true oracle values. For each method, we report the best test loss over all training epochs and over several random initializations of the optimization.

The results are shown in Table 1, where "$W$ init." refers to the initialization of the true system unitary matrix $W$, which is sampled from either the restricted-capacity set given by (3) or the wider product set.

Table 1: Results for system identification in terms of best normalized MSE, comparing restricted-capacity and full-capacity uRNNs under both choices of true-system initialization ($W$ drawn from the restricted-capacity set of (3) or from the wider set of unitary matrices).

Notice that for $N$ below the critical dimension, the restricted-capacity uRNN achieves comparable or better performance than the full-capacity uRNN. At $N = 7$, the restricted-capacity and full-capacity uRNNs achieve relatively comparable performance, with the full-capacity uRNN achieving slightly lower error. For $N$ above the critical dimension, the full-capacity uRNN always achieves better performance than the restricted-capacity uRNN. This result confirms our theoretical argument that the restricted-capacity parameterization in (3) lacks the capacity to model all matrices in the unitary group for $N > 7$, and it indicates the advantage of using a full-capacity unitary recurrence matrix.

5.1.2 Copy memory problem

The experimental setup follows the copy memory problem from [10], which itself was based on the experiment from [6]. We consider alternative hidden state dimensions and extend the sequence lengths to $T = 1000$ and $T = 2000$, which are longer than the maximum length considered in previous literature.

In this task, the data is a vector of length $T + 20$ whose elements are drawn from 10 categories. The vector begins with a sequence of 10 symbols sampled uniformly from categories 1 to 8. The next $T - 1$ elements of the vector are the ninth, 'blank' category, followed by a single element from the tenth category, the 'delimiter'. The remaining ten elements are 'blank'. The task is to output $T + 10$ blank characters followed by the sequence from the beginning of the vector. We use average cross entropy as the training loss function. The baseline solution outputs the blank category for $T + 10$ time steps and then guesses a random symbol uniformly from the first eight categories. This baseline has an expected average cross entropy of $10 \log(8) / (T + 20)$.
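A minimal sketch of how the copy-task sequences and the memoryless baseline can be constructed; the particular category indices and array layout are illustrative choices.

```python
import numpy as np

def copy_task_batch(T, batch_size, rng):
    """Generate input/target sequences of length T + 20 for the copy memory problem.
    Categories: 0-7 data symbols, 8 'blank', 9 'delimiter'."""
    data = rng.integers(0, 8, size=(batch_size, 10))          # 10 symbols to memorize
    blank = np.full((batch_size, T - 1), 8)
    delim = np.full((batch_size, 1), 9)
    tail = np.full((batch_size, 10), 8)
    x = np.concatenate([data, blank, delim, tail], axis=1)    # input, length T + 20
    y = np.concatenate([np.full((batch_size, T + 10), 8), data], axis=1)   # target
    return x, y

T = 1000
x, y = copy_task_batch(T, 4, np.random.default_rng(0))

# Memoryless baseline: output 'blank' for T + 10 steps, then guess uniformly over 8 symbols.
baseline = 10 * np.log(8) / (T + 20)
print(round(baseline, 3))   # about 0.020 for T = 1000, 0.010 for T = 2000
```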

Figure 1: Results of the copy memory problem with sequence lengths of 1000 (left) and 2000 (right). The full-capacity uRNN converges quickly to a perfect solution, while the LSTM and restricted-capacity uRNN with approximately the same number of parameters are unable to improve past the baseline naive solution.

The full-capacity uRNN uses no gradient normalization, and the hidden state sizes of the restricted-capacity uRNN and the LSTM are chosen to approximately match the full-capacity uRNN's number of parameters. The training set size is 100000 and the test set size is 10000. The results of the $T = 1000$ experiment can be found on the left half of Figure 1. The full-capacity uRNN converges to a solution with zero average cross entropy after about 2000 training iterations, whereas the restricted-capacity uRNN settles to the baseline solution of 0.020. The results of the $T = 2000$ experiment can be found on the right half of Figure 1. The full-capacity uRNN hovers around the baseline solution for about 5000 training iterations, after which it drops down to zero average cross entropy. The restricted-capacity uRNN again settles down to the baseline solution of 0.010. These results demonstrate that the full-capacity uRNN is very effective for problems requiring very long memory.

5.2 Speech data

We now apply restricted-capacity and full-capacity uRNNs to real-world speech data and compare their performance to LSTMs. The main task we consider is predicting the log-magnitude of future frames of a short-time Fourier transform (STFT). The STFT is a commonly used feature domain for speech enhancement, and is defined as the Fourier transform of short windowed frames of the time series. In the STFT domain, a real-valued audio signal is represented as a complex-valued matrix composed of frames, where each frame contains a number of frequency bins determined by the duration of the time-domain frame. Most speech processing algorithms use the log-magnitude of the complex STFT values and reconstruct the processed audio signal using the phase of the original observations.

The frame prediction task is as follows: given the log-magnitudes of all STFT frames up to time $t$, predict the log-magnitude of the STFT frame at time $t + 1$. We use the TIMIT dataset [17]. According to common practice [18], we use a training set of 3690 utterances from 462 speakers, a validation set of 400 utterances, and an evaluation set of 192 utterances. The training, validation, and evaluation sets have distinct speakers. Results are reported on the evaluation set using the network parameters that perform best on the validation set in terms of the loss function over three training trials. All TIMIT audio is resampled to 8 kHz. The STFT uses a Hann analysis window of 256 samples (32 milliseconds) and a window hop of 128 samples (16 milliseconds).
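A minimal sketch of the frame-prediction feature pipeline under the stated assumptions (8 kHz audio, 256-sample Hann window, 128-sample hop); the framing code and epsilon floor are illustrative, not the exact preprocessing used in the experiments.

```python
import numpy as np

def log_magnitude_frames(x, n_win=256, hop=128, eps=1e-8):
    """Compute log-magnitude STFT frames of a 1-D signal x (assumed 8 kHz)."""
    window = np.hanning(n_win)
    n_frames = 1 + (len(x) - n_win) // hop
    frames = np.stack([x[t * hop : t * hop + n_win] * window for t in range(n_frames)])
    spec = np.fft.rfft(frames, axis=1)              # one-sided spectrum, n_win // 2 + 1 bins
    return np.log(np.abs(spec) + eps)

# One-frame-ahead prediction pairs: inputs are frames up to time t, target is frame t + 1.
rng = np.random.default_rng(0)
x = rng.standard_normal(8000)                       # stand-in for 1 second of 8 kHz audio
S = log_magnitude_frames(x)
inputs, targets = S[:-1], S[1:]
```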

The LSTM requires gradient clipping during optimization, while the restricted-capacity and full-capacity uRNNs do not. The hidden state dimensions of the LSTMs are chosen to match the number of parameters of the full-capacity uRNNs. For the restricted-capacity uRNNs, we run models that match either the hidden state dimension or the number of parameters of the full-capacity uRNN. For the LSTMs and restricted-capacity uRNNs, we use RMSprop [15]. For the full-capacity uRNN, we also use RMSprop to optimize all network parameters except the recurrence matrix, for which we use stochastic gradient descent along the Stiefel manifold with the update (6), a fixed learning rate, and no gradient normalization.

Model                      N    # parameters  Valid. MSE  Eval. MSE  SegSNR (dB)  STOI  PESQ
LSTM                       84   83k           18.02       18.32      1.95         0.77  1.99
Restricted-capacity uRNN   128  67k           15.03       15.78      3.30         0.83  2.36
Restricted-capacity uRNN   158  83k           15.06       14.87      3.32         0.83  2.33
Full-capacity uRNN         128  83k           14.78       15.24      3.57         0.84  2.40
LSTM                       120  135k          16.59       16.98      2.32         0.79  2.14
Restricted-capacity uRNN   192  101k          15.20       15.17      3.31         0.83  2.35
Restricted-capacity uRNN   256  135k          15.27       15.63      3.31         0.83  2.36
Full-capacity uRNN         192  135k          14.56       14.66      3.76         0.84  2.42
LSTM                       158  200k          15.49       15.80      2.92         0.81  2.24
Restricted-capacity uRNN   378  200k          15.78       16.14      3.16         0.83  2.35
Full-capacity uRNN         256  200k          14.41       14.45      3.75         0.84  2.38
Table 2: Log-magnitude STFT prediction results on speech data, evaluated using objective and perceptual metrics (see text for description).
Figure 2: Ground truth and one-frame-ahead predictions of a spectrogram for an example utterance. For each model, the hidden state dimension is chosen for the best validation MSE. Notice that the full-capacity uRNN achieves the best detail in its predictions.

Results are shown in Table 2, and Figure 2 shows example predictions of the three types of networks. Results in Table 2 are given in terms of the mean-squared error (MSE) loss function and several metrics computed on the time-domain signals, which are reconstructed from the predicted log-magnitude and the original phase of the STFT. These time-domain metrics are segmental signal-to-noise ratio (SegSNR), short-time objective intelligibility (STOI), and perceptual evaluation of speech quality (PESQ). SegSNR, computed using [19], uses a voice activity detector to avoid measuring SNR in silent frames. STOI is designed to correlate well with human intelligibility of speech, and takes on values between 0 and 1, with a higher score indicating higher intelligibility [20]. PESQ is the ITU-T standard for telephone voice quality testing [21, 22], and is a popular perceptual quality metric for speech enhancement [23]. PESQ ranges from 1 (bad quality) to 4.5 (no distortion).

Note that full-capacity uRNNs generally perform better than restricted-capacity uRNNs with the same number of parameters, and both types of uRNN significantly outperform LSTMs.

5.3 Pixel-by-pixel MNIST

As another challenging long-term memory task with natural data, we test the performance of LSTMs and uRNNs on pixel-by-pixel MNIST and permuted pixel-by-pixel MNIST, first proposed by [5] and used by [10] to test restricted-capacity uRNNs. For permuted pixel-by-pixel MNIST, the pixels are shuffled by a fixed permutation, thereby creating some non-local dependencies between pixels in an image. Since the MNIST images are 28 × 28 pixels, the resulting pixel-by-pixel sequences are 784 elements long. We use 5000 of the 60000 training examples as a validation set to perform early stopping with a patience of 5. The loss function is cross-entropy. Weights with the best validation loss are used to process the evaluation set. The full-capacity uRNN uses RMSprop-style gradient normalization.
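A minimal sketch of how images can be converted into (permuted) pixel sequences for these tasks; the fixed permutation seed and array shapes are illustrative.

```python
import numpy as np

def pixel_sequences(images, permute=False, seed=0):
    """Flatten 28x28 images into length-784 pixel sequences; optionally apply one
    fixed random permutation (shared across all images) for the permuted task."""
    seqs = images.reshape(len(images), 28 * 28).astype(np.float32)
    if permute:
        perm = np.random.default_rng(seed).permutation(28 * 28)
        seqs = seqs[:, perm]
    return seqs[:, :, None]          # shape (num_images, 784, 1): one pixel per time step

# Example with random stand-in data in place of MNIST images.
images = np.random.default_rng(1).integers(0, 256, size=(4, 28, 28))
unpermuted = pixel_sequences(images)
permuted = pixel_sequences(images, permute=True)
```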

Model                      N    # parameters  Validation accuracy  Evaluation accuracy

Unpermuted

LSTM                       128   68k          98.1                 97.8
LSTM                       256  270k          98.5                 98.2
Restricted-capacity uRNN   512   16k          97.9                 97.5
Full-capacity uRNN         116   16k          92.7                 92.8
Full-capacity uRNN         512  270k          97.5                 96.9

Permuted

LSTM                       128   68k          91.7                 91.3
LSTM                       256  270k          92.1                 91.7
Restricted-capacity uRNN   512   16k          94.2                 93.3
Full-capacity uRNN         116   16k          92.2                 92.1
Full-capacity uRNN         512  270k          94.7                 94.1
Table 3: Results for unpermuted and permuted pixel-by-pixel MNIST. Classification accuracies are reported for trained model weights that achieve the best validation loss.

Learning curves are shown in Figure 3, and a summary of classification accuracies is shown in Table 3. For the unpermuted task, the LSTM with $N = 256$ achieves the best evaluation accuracy of 98.2%. For the permuted task, the full-capacity uRNN with $N = 512$ achieves the best evaluation accuracy of 94.1%, which is state-of-the-art on this task. Both uRNNs outperform LSTMs on the permuted case, achieving their best performance after fewer training epochs and using an equal or smaller number of trainable parameters. This performance difference suggests that LSTMs are only able to model local dependencies, while uRNNs have superior long-term memory capabilities. Despite not representing all unitary matrices, the restricted-capacity uRNN with $N = 512$ still achieves an impressive evaluation accuracy of 93.3% with only 16k trainable parameters, outperforming the full-capacity uRNN with $N = 116$ that matches this number of parameters. This result suggests that further exploration of the potential trade-off between hidden state dimension and capacity of unitary parameterizations is warranted.

Figure 3: Learning curves for unpermuted pixel-by-pixel MNIST (top panel) and permuted pixel-by-pixel MNIST (bottom panel).

6 Conclusion

Unitary recurrent matrices prove to be an effective means of addressing the vanishing and exploding gradient problems. We provided a theoretical argument to quantify the capacity of constrained unitary matrices. We also described a method for directly optimizing a full-capacity unitary matrix by constraining the gradient to lie on the differentiable manifold of unitary matrices. The effect of restricting the capacity of the unitary weight matrix was tested on system identification and memory tasks, in which full-capacity unitary recurrent neural networks (uRNNs) outperformed both restricted-capacity uRNNs from [10] and LSTMs. Full-capacity uRNNs also outperformed restricted-capacity uRNNs on log-magnitude STFT prediction of natural speech signals and classification of permuted pixel-by-pixel images of handwritten digits, and both types of uRNN significantly outperformed LSTMs. In future work, we plan to explore more general forms of restricted-capacity unitary matrices, including constructions based on products of elementary unitary matrices such as Householder operators or Givens operators.

Acknowledgments: We thank an anonymous reviewer for suggesting improvements to our proof in Section 3 and Vamsi Potluru for helpful discussions. Scott Wisdom and Thomas Powers were funded by U.S. ONR contract number N00014-12-G-0078, delivery orders 13 and 24. Les Atlas was funded by U.S. ARO grant W911NF-15-1-0450.

References


  • [1] Y. Bengio, P. Simard, and P. Frasconi. Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 5(2):157–166, 1994.
  • [2] S. Hochreiter, Y. Bengio, P. Frasconi, and J. Schmidhuber. Gradient flow in recurrent nets: the difficulty of learning long-term dependencies. In S. C. Kremer and J. F. Kolen, eds, A field guide to dynamical recurrent neural networks. IEEE Press, 2001.
  • [3] R. Pascanu, T. Mikolov, and Y. Bengio. On the difficulty of training Recurrent Neural Networks. arXiv:1211.5063, Nov. 2012.
  • [4] A. M. Saxe, J. L. McClelland, and S. Ganguli. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. arXiv:1312.6120, Dec. 2013.
  • [5] Q. V. Le, N. Jaitly, and G. E. Hinton. A simple way to initialize recurrent networks of rectified linear units. arXiv:1504.00941, Apr. 2015.
  • [6] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural computation, 9(8):1735–1780, 1997.
  • [7] K. Cho, B. van Merriënboer, D. Bahdanau, and Y. Bengio. On the properties of neural machine translation: Encoder-decoder approaches. arXiv:1409.1259, 2014.
  • [8] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. arXiv:1512.03385, Dec. 2015.
  • [9] V. Mnih, N. Heess, A. Graves, and K. Kavukcuoglu. Recurrent models of visual attention. In Advances in Neural Information Processing Systems (NIPS), pp. 2204–2212, 2014.
  • [10] M. Arjovsky, A. Shah, and Y. Bengio. Unitary Evolution Recurrent Neural Networks. In International Conference on Machine Learning (ICML), Jun. 2016.
  • [11] A. S. Householder. Unitary triangularization of a nonsymmetric matrix. Journal of the ACM, 5(4):339–342, 1958.
  • [12] R. Gilmore. Lie groups, physics, and geometry: an introduction for physicists, engineers and chemists. Cambridge University Press, 2008.
  • [13] A. Sard. The measure of the critical values of differentiable maps. Bulletin of the American Mathematical Society, 48(12):883–890, 1942.
  • [14] H. D. Tagare. Notes on optimization on Stiefel manifolds. Technical report, Yale University, 2011.
  • [15] T. Tieleman and G. Hinton. Lecture 6.5—RmsProp: Divide the gradient by a running average of its recent magnitude, 2012. COURSERA: Neural Networks for Machine Learning.
  • [16] Theano Development Team. Theano: A Python framework for fast computation of mathematical expressions. arXiv: 1605.02688, May 2016.
  • [17] J. S. Garofolo, L. F. Lamel, W. M. Fisher, J. G. Fiscus, and D. S. Pallett. DARPA TIMIT acoustic-phonetic continous speech corpus. Technical Report NISTIR 4930, National Institute of Standards and Technology, 1993.
  • [18] A. K. Halberstadt. Heterogeneous acoustic measurements and multiple classifiers for speech recognition. PhD thesis, Massachusetts Institute of Technology, 1998.
  • [19] M. Brookes. VOICEBOX: Speech processing toolbox for MATLAB, 2002. [Online]. Available: http://www.ee.ic.ac.uk/hp/staff/dmb/voicebox/voicebox.html.
  • [20] C. Taal, R. Hendriks, R. Heusdens, and J. Jensen. An algorithm for intelligibility prediction of time-frequency weighted noisy speech. IEEE Trans. on Audio, Speech, and Language Processing, 19(7):2125–2136, Sep. 2011.
  • [21] A. Rix, J. Beerends, M. Hollier, and A. Hekstra. Perceptual evaluation of speech quality (PESQ)-a new method for speech quality assessment of telephone networks and codecs. In Proc. ICASSP, vol. 2, pp. 749–752, 2001.
  • [22] ITU-T P.862. Perceptual evaluation of speech quality (PESQ): An objective method for end-to-end speech quality assessment of narrow-band telephone networks and speech codecs, 2000.
  • [23] P. C. Loizou. Speech Enhancement: Theory and Practice. CRC Press, Boca Raton, FL, Jun. 2007.