# DizzyRNN: Reparameterizing Recurrent Neural Networks for Norm-Preserving Backpropagation

###### Abstract

The vanishing and exploding gradient problems are well-studied obstacles that make it difficult for recurrent neural networks to learn long-term time dependencies. We propose a reparameterization of standard recurrent neural networks to update linear transformations in a provably norm-preserving way through Givens rotations. Additionally, we use the absolute value function as an element-wise non-linearity to preserve the norm of backpropagated signals over the entire network. We show that this reparameterization reduces the number of parameters and maintains the same algorithmic complexity as a standard recurrent neural network, while outperforming standard recurrent neural networks with orthogonal initializations and Long Short-Term Memory networks on the copy problem.

Cornell University, Ithaca, NY

## 1 Defining the problem

Recurrent neural networks (RNNs) are trained by updating model parameters through gradient descent with backpropagation to minimize a loss function. However, RNNs in general will not prevent the loss derivative signal from decreasing in magnitude as it propagates through the network. This results in the vanishing gradient problem, where the loss derivative signal becomes too small to update model parameters (Bengio et al., 1994). This hampers training of RNNs, especially for learning long-term dependencies in data.

## 2 Signal scaling analysis

The prediction of an RNN is the result of a composition of linear transformations, element-wise non-linearities, and bias additions. To identify the sources of vanishing and exploding gradients in such a network, one can examine the minimum and maximum scaling properties of each transformation independently, and compose the resulting scaling factors.

### 2.1 Linear transformations

Let $y = Wx$ be an arbitrary linear transformation, where $W \in \mathbb{R}^{m \times n}$ is a matrix of rank $r$.

###### Theorem 1.

The singular value decomposition (SVD) of $W$ is $W = U \Sigma V^\top$, for orthogonal $U$ and $V$, and diagonal $\Sigma$ with diagonal elements $\sigma_1 \ge \sigma_2 \ge \dots \ge \sigma_r$, the singular values of $W$.

From the SVD, Corollaries 1 and 2 follow.

###### Corollary 1.

Let $\sigma_{\min}$ and $\sigma_{\max}$ be the minimum and maximum singular values of $W$, respectively. Then $\sigma_{\min} \left\|x\right\|_2 \le \left\|Wx\right\|_2 \le \sigma_{\max} \left\|x\right\|_2$.

###### Corollary 2.

Let $\sigma_{\min}$ and $\sigma_{\max}$ be the minimum and maximum singular values of $W$, respectively. Then $\sigma_{\min}$ and $\sigma_{\max}$ are also the minimum and maximum singular values of $W^\top$.

Proofs for these corollaries are deferred to the appendix.

Let $E$ be a scalar function of $y = Wx$. Then

$$\frac{\partial E}{\partial x} = W^\top \frac{\partial E}{\partial y}.$$

In an RNN, this relation describes the scaling effect of a linear transformation on the backpropagated signal. By Corollary 2, each linear transformation scales the loss derivative signal by at least the minimum singular value of the corresponding weight matrix and at most by the maximum singular value.

###### Theorem 2.

All singular values of an orthogonal matrix are $1$.

By Corollary 2, if the linear transformation is orthogonal, then the linear transformation will not scale the loss derivative signal.
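These scaling bounds are easy to verify numerically. The following NumPy sketch (our illustration, not part of the paper's TensorFlow implementation) checks that backpropagation through a random linear map scales the signal by a factor between its extreme singular values, and that an orthogonal map leaves the norm unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)

# A generic weight matrix scales a backpropagated signal dE/dy by a factor
# between its smallest and largest singular values (Corollary 2).
W = rng.standard_normal((6, 6))
s = np.linalg.svd(W, compute_uv=False)  # singular values, descending order
dy = rng.standard_normal(6)
dx = W.T @ dy  # backpropagation through y = W x
ratio = np.linalg.norm(dx) / np.linalg.norm(dy)
assert s[-1] - 1e-9 <= ratio <= s[0] + 1e-9

# An orthogonal matrix (all singular values 1, Theorem 2) leaves the norm fixed.
Q, _ = np.linalg.qr(rng.standard_normal((6, 6)))
assert np.isclose(np.linalg.norm(Q.T @ dy), np.linalg.norm(dy))
```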

### 2.2 Non-linear functions

Let $y = f(x)$ be an arbitrary element-wise non-linear transformation $f$. Let $E$ be a scalar function of $y$. Then

$$\frac{\partial E}{\partial x} = f'(x) \odot \frac{\partial E}{\partial y},$$

where $f'$ denotes the first derivative of $f$ and $\odot$ denotes the element-wise product. The $i$-th element of $\frac{\partial E}{\partial y}$ is scaled by $f'(x_i)$, so the norm of the backpropagated signal is scaled at least by $\min_i \left|f'(x_i)\right|$ and at most by $\max_i \left|f'(x_i)\right|$.
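For example, the derivative of tanh is bounded by $1$, so backpropagation through a tanh layer can only shrink the signal. A small NumPy check (illustrative only; not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(8)
dy = rng.standard_normal(8)

# Backprop through y = tanh(x): dE/dx = tanh'(x) * dE/dy,
# with tanh'(x) = 1 - tanh(x)^2 lying in (0, 1].
dx = (1.0 - np.tanh(x) ** 2) * dy

# Every element of the derivative is at most 1, so the norm cannot grow.
assert np.linalg.norm(dx) <= np.linalg.norm(dy)
```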

### 2.3 Bias

Let $y = x + b$ be an arbitrary addition of a bias $b$ to $x$. Let $E$ be a scalar function of $y$. Then

$$\frac{\partial E}{\partial x} = \frac{\partial E}{\partial y}.$$

Though additive bias does not preserve the norm during a forward pass over the network, it does preserve the norm of the backpropagated signal during a backward pass.

## 3 Previous Work

In general, the singular values of weight matrices in RNNs are allowed to vary without bound, leaving the network susceptible to the vanishing and exploding gradient problems. A popular approach to mitigating this problem is orthogonal weight initialization, first proposed by Saxe et al. (2013). Later, identity matrix initialization was introduced for RNNs with ReLU non-linearities, and was shown to help networks learn longer time dependencies (Le et al., 2015).

Arjovsky et al. (2015) introduced the idea of an orthogonal reparametrization of weight matrices. Their approach composes several simple complex-valued unitary matrices, each parametrized such that updates during gradient descent remain on the manifold of unitary matrices. The authors prove that their network cannot have an exploding gradient, and believe this is the first time a non-linear network has been proven to have this property.

Wisdom et al. (2016) note that Arjovsky's approach does not parametrize all orthogonal matrices, and propose a method of computing the gradients of a weight matrix such that the update maintains orthogonality while still allowing the matrix to express the full set of orthogonal matrices.

Jia (2016) proposes a method of regularizing the singular values during training by periodically computing the full SVD of the weight matrices, and clipping the singular values to lie within some maximum allowed distance from $1$. The author shows this has comparable performance to batch normalization in convolutional neural networks. As computing the SVD is an expensive operation, this approach may not translate well to RNNs with large weight matrices.

## 4 DizzyRNN

We propose a simple method of updating orthogonal linear transformations in an RNN in a way that maintains orthogonality. We combine this approach with the use of the absolute value function as the non-linearity, thus constructing an RNN that provably has no vanishing or exploding gradient. We term an RNN using this approach a Dizzy Recurrent Neural Network (DizzyRNN). The reparameterization maintains the same algorithmic space and time complexity as a standard RNN.

### 4.1 Givens rotation

An orthogonal matrix may be constructed as a product of Givens rotations (Golub & Van Loan, 2012). Each rotation is a sparse matrix multiplication, depending on only two elements and modifying only two elements, meaning each rotation can be performed in $O(1)$ time. Additionally, each rotation is represented by one parameter: a rotation angle. These rotation angles can be updated directly using gradient descent through backpropagation.

Let $a$ and $b$ denote the indices of the two dimensions affected by one rotation, with $a < b$. Let $R_{ab}(\theta)$ express this rotation by an angle $\theta$. The rotation matrix is sparse and orthogonal with the following form: each diagonal element is $1$ except for the $a$-th and $b$-th diagonal elements, which are $\cos\theta$. Additionally, two off-diagonal elements are non-zero; the $(a, b)$ element is $-\sin\theta$ and the $(b, a)$ element is $\sin\theta$. All remaining off-diagonal elements are $0$. Let $E$ be a scalar function of $y = R_{ab}(\theta)\,x$. Then

$$\frac{\partial E}{\partial x} = R_{ab}(\theta)^\top \frac{\partial E}{\partial y}.$$

Recall that since the matrix $R_{ab}(\theta)$ is orthogonal with minimum and maximum singular values of $1$, the transpose $R_{ab}(\theta)^\top$ also has minimum and maximum singular values of $1$ (by Corollary 2).

To update the rotation angles, note that the only elements of $y = R_{ab}(\theta)\,x$ that differ from the corresponding elements of $x$ are $y_a$ and $y_b$. Each can be expressed as $y_a = x_a \cos\theta - x_b \sin\theta$ and $y_b = x_a \sin\theta + x_b \cos\theta$. The derivative of $E$ with respect to the parameter $\theta$ is thus

$$\frac{\partial E}{\partial \theta} = \frac{\partial E}{\partial y_a}\left(-x_a \sin\theta - x_b \cos\theta\right) + \frac{\partial E}{\partial y_b}\left(x_a \cos\theta - x_b \sin\theta\right).$$

To simplify this expression, define the matrix $S_{ab} \in \mathbb{R}^{2 \times n}$ as

$$S_{ab} = \begin{bmatrix} e_a^\top \\ e_b^\top \end{bmatrix},$$

where $e_k$ is a column vector of zeros with a $1$ in the $k$-th index. The matrix $S_{ab}$ selects only the $a$-th and $b$-th indices of a vector. Additionally, define the matrix $M(\theta)$ as

$$M(\theta) = \begin{bmatrix} -\sin\theta & -\cos\theta \\ \cos\theta & -\sin\theta \end{bmatrix}.$$

Note that $M(\theta)$ always has this form; it does not depend on the indices $a$ and $b$. Now the derivative of $E$ with respect to the parameter $\theta$ can be represented as

$$\frac{\partial E}{\partial \theta} = \left(S_{ab}\,\frac{\partial E}{\partial y}\right)^{\!\top} M(\theta)\, S_{ab}\, x.$$

This multiplication can be implemented in $O(1)$ time.
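The rotation and its angle gradient can be sketched in NumPy as follows. The helper names `givens` and `dtheta` are ours, and the closed-form gradient is checked against a finite difference of a linear toy objective $E = \frac{\partial E}{\partial y} \cdot (R\,x)$:

```python
import numpy as np

def givens(n, a, b, theta):
    """Dense n-by-n Givens rotation acting on coordinates a < b."""
    R = np.eye(n)
    c, s = np.cos(theta), np.sin(theta)
    R[a, a] = c; R[b, b] = c
    R[a, b] = -s; R[b, a] = s
    return R

def dtheta(x, dy, a, b, theta):
    """O(1) gradient of E w.r.t. the rotation angle: (S dy)^T M(theta) (S x)."""
    M = np.array([[-np.sin(theta), -np.cos(theta)],
                  [ np.cos(theta), -np.sin(theta)]])
    return np.array([dy[a], dy[b]]) @ M @ np.array([x[a], x[b]])

rng = np.random.default_rng(2)
n, a, b, theta = 5, 1, 3, 0.7
x, dy = rng.standard_normal(n), rng.standard_normal(n)

# Check the closed form against a central finite difference of E = dy . (R x).
eps = 1e-6
num = (dy @ givens(n, a, b, theta + eps) @ x
       - dy @ givens(n, a, b, theta - eps) @ x) / (2 * eps)
assert np.isclose(dtheta(x, dy, a, b, theta), num)
```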

### 4.2 Parallelization Through Packed Rotations

While DizzyRNNs maintain the same algorithmic complexity as standard RNNs, it is important to perform as many Givens rotations in parallel as possible in order to get good performance on GPU hardware. Since each Givens rotation only affects two values in the input vector, we can perform $\lfloor n/2 \rfloor$ Givens rotations in parallel. We therefore only need $n - 1$ sequential operations, each of which has $O(n)$ computational and space complexity. We refer to each of these operations as a packed rotation, representable by a sparse matrix multiplication.
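A packed rotation can be applied as a single vectorized operation over disjoint index pairs. The sketch below (function name ours, not from the paper) confirms that a packed rotation preserves the 2-norm:

```python
import numpy as np

def packed_rotation(x, pairs, thetas):
    """Apply floor(n/2) disjoint Givens rotations to x in one vectorized step."""
    a, b = pairs[:, 0], pairs[:, 1]
    c, s = np.cos(thetas), np.sin(thetas)
    y = x.copy()
    y[a] = c * x[a] - s * x[b]
    y[b] = s * x[a] + c * x[b]
    return y

rng = np.random.default_rng(3)
x = rng.standard_normal(8)
pairs = np.array([[0, 1], [2, 3], [4, 5], [6, 7]])  # disjoint index pairs
thetas = rng.uniform(0, 2 * np.pi, size=4)
y = packed_rotation(x, pairs, thetas)

# A packed rotation is orthogonal, so it preserves the 2-norm.
assert np.isclose(np.linalg.norm(y), np.linalg.norm(x))
```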

### 4.3 Norm preserving non-linearity

Typically used non-linearities like tanh and sigmoid strictly reduce the norm of a loss derivative signal during backpropagation. ReLU only preserves the norm in the case that each input element is non-negative. We propose the use of an element-wise absolute value non-linearity (denoted abs). Let $y = \mathrm{abs}(x)$ be the element-wise absolute value of $x$, and let $E$ be a scalar function of $y$. Then

$$\frac{\partial E}{\partial x} = \mathrm{sgn}(x) \odot \frac{\partial E}{\partial y}.$$

Since each element of $\mathrm{sgn}(x)$ is $1$ or $-1$, the use of this non-linearity preserves the norm of the backpropagated signal.
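A NumPy check of this norm-preservation property (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.standard_normal(8)
dy = rng.standard_normal(8)

# Backprop through y = abs(x): dE/dx = sign(x) * dE/dy.
dx = np.sign(x) * dy

# Each element of sign(x) is +1 or -1 (x has no exact zeros here),
# so the backpropagated norm is unchanged.
assert np.isclose(np.linalg.norm(dx), np.linalg.norm(dy))
```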

### 4.4 Hidden state update

Let $P = P_{n-1} \cdots P_1$ represent a product of packed rotations, $h_t$ be a hidden state at time step $t$, $x_t$ be an input vector, and $b$ be a bias vector. Define the hidden state update equation as

$$h_{t+1} = \mathrm{abs}\left(P h_t + W x_t + b\right).$$

If $W$ is square, it can also be represented as packed rotations $Q = Q_{n-1} \cdots Q_1$, resulting in the hidden state update equation

$$h_{t+1} = \mathrm{abs}\left(P h_t + Q x_t + b\right).$$
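A single DizzyRNN state update might be sketched as below. The function name and the list-of-packed-rotations representation are our assumptions, not the paper's TensorFlow code:

```python
import numpy as np

def dizzy_step(h, x, packed, W, b):
    """One DizzyRNN update: h_next = abs(P h + W x + b), where P is given as a
    list of packed rotations, each a (pairs, thetas) tuple of disjoint index
    pairs and their rotation angles."""
    z = h.astype(float).copy()
    for pairs, thetas in packed:
        i, j = pairs[:, 0], pairs[:, 1]
        c, s = np.cos(thetas), np.sin(thetas)
        zi, zj = z[i].copy(), z[j].copy()
        z[i] = c * zi - s * zj
        z[j] = s * zi + c * zj
    return np.abs(z + W @ x + b)

# With no input contribution, the update is a pure rotation followed by abs,
# so the hidden-state norm is exactly preserved.
h = np.array([1.0, -2.0, 3.0, 0.5])
packed = [(np.array([[0, 1], [2, 3]]), np.array([0.3, 1.1])),
          (np.array([[0, 2], [1, 3]]), np.array([0.8, -0.4]))]
h_next = dizzy_step(h, np.zeros(2), packed, np.zeros((4, 2)), np.zeros(4))
assert np.isclose(np.linalg.norm(h_next), np.linalg.norm(h))
```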

### 4.5 Eliminating Vanishing and Exploding Gradients

Arjovsky et al. claim to provide the first proof of a network having no exploding gradient (through their uRNN) (Arjovsky et al., 2015). We show that DizzyRNN has no exploding gradient and, more importantly, no vanishing gradient.

Let a state update equation for an RNN be defined as

$$h_{t+1} = f\left(P h_t + W x_t + b\right).$$

Let $E$ be a loss function over the RNN.

###### Theorem 3.

In a DizzyRNN cell,

$$\left\|\frac{\partial E}{\partial h_t}\right\|_2 = \left\|\frac{\partial E}{\partial h_{t+1}}\right\|_2.$$

###### Theorem 4.

If $W$ is square, then

$$\left\|\frac{\partial E}{\partial x_t}\right\|_2 = \left\|\frac{\partial E}{\partial h_{t+1}}\right\|_2.$$

Therefore, the network can propagate loss derivative signals through an arbitrarily large number of state updates and stacked cells. The proofs for Theorems 3 and 4 are deferred to the Appendix.

## 5 Incorporating Singular Value Regularization

### 5.1 Exposing singular values

Let $y = Wx$ be an arbitrary linear transformation, where $W$ is an $n \times n$ matrix of rank $n$. Such a matrix can be represented by the DizzyRNN reparameterization through the construction $W = U \Sigma V^\top$, where $U$ and $V$ are orthogonal matrices and $\Sigma$ is a diagonal matrix. This construction represents a singular value decomposition of a linear transformation; however, the diagonal elements of $\Sigma$ (the singular values) can be updated directly along with the rotation angles of $U$ and $V$. Additionally, the distribution of singular values can be penalized easily, regularizing the network while allowing full expressivity of linear transformations.

### 5.2 Diagonal matrix

A matrix-vector product $\Sigma x$ where $\Sigma$ is a diagonal matrix can be represented as the element-wise vector product $\sigma \odot x$, where $\sigma$ is the vector of the diagonal elements of $\Sigma$. Let $E$ be a scalar function of $y = \sigma \odot x$. Then

$$\frac{\partial E}{\partial x} = \sigma \odot \frac{\partial E}{\partial y} \qquad \text{and} \qquad \frac{\partial E}{\partial \sigma} = x \odot \frac{\partial E}{\partial y}.$$

Each computation can be performed in $O(n)$ time.
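Both gradients reduce to element-wise products, as this short NumPy check (illustrative only) confirms against the dense diagonal-matrix form:

```python
import numpy as np

rng = np.random.default_rng(5)
sigma = rng.uniform(0.5, 1.5, size=6)  # diagonal (singular) values
x = rng.standard_normal(6)
dy = rng.standard_normal(6)

# Forward: y = sigma * x.  Backward, both in O(n):
dx = sigma * dy       # gradient w.r.t. the input x
dsigma = x * dy       # gradient w.r.t. the singular values themselves

# Agreement with the dense diagonal-matrix formulation.
assert np.allclose(np.diag(sigma) @ x, sigma * x)
assert np.allclose(np.diag(sigma).T @ dy, dx)
```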

### 5.3 Singular value regularization

For a DizzyRNN, an additional term can be added to the loss function to penalize the distance of the singular values of each linear transformation from $1$. For each cell in the stack, let $\sigma_c$ denote the vector of all singular values of all linear transformations in the $c$-th cell; the regularization term is then $\lambda \left\|\sigma_c - \mathbf{1}\right\|_2^2$, where $\lambda$ is a penalty factor and $\mathbf{1}$ is the vector of all ones. The loss function can now be rewritten as

$$E' = E + \lambda \sum_{c=1}^{m} \left\|\sigma_c - \mathbf{1}\right\|_2^2$$

for a DizzyRNN with a stack height of $m$, where $\sigma_c$ represents the vector of all singular values associated with the $c$-th cell in the stack. Note that setting the hyperparameter $\lambda$ to $0$ allows the singular values to grow or decay unbounded, and setting $\lambda$ to $\infty$ constrains each linear transformation to be orthogonal. Additionally, note that initializing the singular values of each linear transformation to $1$ is equivalent to an orthogonal initialization of the DizzyRNN.
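The regularized loss can be sketched as follows (the helper name is ours; the penalty form follows the term above):

```python
import numpy as np

def regularized_loss(base_loss, sigmas, lam):
    """Add the singular-value penalty lam * sum_c ||sigma_c - 1||_2^2
    over all cells in the stack."""
    return base_loss + lam * sum(np.sum((s - 1.0) ** 2) for s in sigmas)

# With all singular values at 1 the penalty vanishes (orthogonal initialization).
assert regularized_loss(0.0, [np.ones(4)], lam=0.1) == 0.0

# Two-cell example: only the second cell's values deviate from 1.
sigmas = [np.array([1.0, 1.0]), np.array([1.2, 0.8])]
assert np.isclose(regularized_loss(0.0, sigmas, lam=0.5), 0.04)
```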

## 6 Experimental results

We implemented DizzyRNN in Tensorflow and compared the performance of DizzyRNN with standard RNNs, Identity RNNs (Le et al., 2015), and Long Short-Term Memory networks (LSTM) (Hochreiter & Schmidhuber, 1997). We evaluated each network on the copy problem described in (Arjovsky et al., 2015). We modify the loss function in this problem to only quantify error on the copied portion of the output, making our baseline accuracy $10\%$ (guessing uniformly at random among the 10 symbols).

The copy problem for our experiments consisted of memorizing a sequence of 10 one-hot vectors of length 10 and outputting the same sequence (via softmax) upon seeing a delimiter after a time lag of 90 steps.
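One common encoding of this task, following Arjovsky et al. (2015), can be generated as below. The exact channel layout (extra blank and delimiter channels) is our assumption, not a specification from the paper:

```python
import numpy as np

def copy_problem_batch(batch, seq_len=10, n_symbols=10, lag=90):
    """Sample inputs/targets for the copy task: a sequence of one-hot symbols,
    a blank lag, a delimiter, then the network must emit the sequence again."""
    rng = np.random.default_rng()
    symbols = rng.integers(0, n_symbols, size=(batch, seq_len))
    total = seq_len + lag + 1 + seq_len
    x = np.zeros((batch, total, n_symbols + 2))  # +2: blank and delimiter channels
    x[np.arange(batch)[:, None], np.arange(seq_len), symbols] = 1.0
    x[:, seq_len:seq_len + lag, n_symbols] = 1.0       # blank filler during the lag
    x[:, seq_len + lag, n_symbols + 1] = 1.0           # delimiter
    x[:, seq_len + lag + 1:, n_symbols] = 1.0          # blanks while copying
    y = symbols  # targets: the original symbols, to be produced after the delimiter
    return x, y
```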

We use a fixed stack size and use only a subset of the total possible packed rotations for every orthogonal matrix.

Each epoch consists of 10 batches of size 100, sampled directly from the underlying distribution.

DizzyRNN manages to reach near perfect accuracy in under 100 epochs while other models either fail to break past the baseline or plateau at a low test accuracy. Note that 100 epochs corresponds to 100000 sampled training sequences.

## 7 Conclusion

DizzyRNNs prove to be a promising method of eliminating the vanishing and exploding gradient problems. The key is using pure rotations in combination with norm-preserving non-linearities to force the norm of the backpropagated gradient at each timestep to remain fixed. Surprisingly, at least for the copy problem, restricting weight matrices to pure rotations actually improves model accuracy. This suggests that gradient information is more valuable than model expressiveness in this domain.

Further experimentation with sampling packed rotations will be a topic of future work. Additionally, we would like to augment other state-of-the-art networks with Dizzy reparameterizations, such as Recurrent Highway Networks (Zilly et al., 2016).

## References

- Arjovsky et al. (2015) Arjovsky, Martín, Shah, Amar, and Bengio, Yoshua. Unitary evolution recurrent neural networks. CoRR, abs/1511.06464, 2015. URL http://arxiv.org/abs/1511.06464.
- Bengio et al. (1994) Bengio, Yoshua, Simard, Patrice, and Frasconi, Paolo. Learning long-term dependencies with gradient descent is difficult. IEEE transactions on neural networks, 5(2):157–166, 1994.
- Golub & Van Loan (2012) Golub, Gene H and Van Loan, Charles F. Matrix computations, volume 3. JHU Press, 2012.
- Hochreiter & Schmidhuber (1997) Hochreiter, Sepp and Schmidhuber, Jürgen. Long short-term memory. Neural computation, 9(8):1735–1780, 1997.
- Jia (2016) Jia, Kui. Improving training of deep neural networks via singular value bounding. arXiv preprint arXiv:1611.06013, 2016.
- Le et al. (2015) Le, Quoc V., Jaitly, Navdeep, and Hinton, Geoffrey E. A simple way to initialize recurrent networks of rectified linear units. CoRR, abs/1504.00941, 2015. URL http://arxiv.org/abs/1504.00941.
- Saxe et al. (2013) Saxe, Andrew M., McClelland, James L., and Ganguli, Surya. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. CoRR, abs/1312.6120, 2013. URL http://arxiv.org/abs/1312.6120.
- Wisdom et al. (2016) Wisdom, Scott, Powers, Thomas, Hershey, John, Le Roux, Jonathan, and Atlas, Les. Full-capacity unitary recurrent neural networks. In Advances In Neural Information Processing Systems, pp. 4880–4888, 2016.
- Zilly et al. (2016) Zilly, Julian Georg, Srivastava, Rupesh Kumar, Koutník, Jan, and Schmidhuber, Jürgen. Recurrent highway networks. arXiv preprint arXiv:1607.03474, 2016.

## Appendix

Let $y = Wx$ be an arbitrary linear transformation, where $W \in \mathbb{R}^{m \times n}$ is a matrix of rank $r$.

###### Theorem 1.

The singular value decomposition of $W$ is expressed as

$$W = U \Sigma V^\top = \sum_{i=1}^{r} \sigma_i u_i v_i^\top,$$

for orthogonal $U$ and $V$, and diagonal $\Sigma$. $u_i$ and $v_i$ are the $i$-th columns of $U$ and $V$, respectively, and $\sigma_i$ is the $i$-th diagonal element of $\Sigma$. The columns of $U$ are the left singular vectors, the columns of $V$ are the right singular vectors, and the diagonal elements of $\Sigma$ are the singular values.

###### Corollary 1.

Let $\sigma_{\min}$ and $\sigma_{\max}$ be the minimum and maximum singular values of $W$, respectively. Then $\sigma_{\min} \left\|x\right\|_2 \le \left\|Wx\right\|_2 \le \sigma_{\max} \left\|x\right\|_2$.

###### Proof.

Express $Wx$ as $Wx = \sum_{i=1}^{r} \sigma_i \left(v_i^\top x\right) u_i$, i.e. a linear combination of the orthonormal set of left singular vectors. The $i$-th coefficient of the combination is the scalar $\sigma_i v_i^\top x$. Since the set of left singular vectors is orthonormal, the $2$-norm of $Wx$ is the Pythagorean sum of the coefficients of the linear combination, i.e., $\left\|Wx\right\|_2 = \sqrt{\sum_{i=1}^{r} \left(\sigma_i v_i^\top x\right)^2}$. Note that the term $\left|v_i^\top x\right|$ is the magnitude of the projection of $x$ onto $v_i$. Without loss of generality, fix the norm of $x$ to $1$. Let $v_{\min}$ and $v_{\max}$ be the right singular vectors corresponding to $\sigma_{\min}$ and $\sigma_{\max}$, respectively. Then, the norm of $Wx$ is minimized for $x$ parallel to $v_{\min}$, and maximized for $x$ parallel to $v_{\max}$. The corresponding norms of $Wx$ are $\sigma_{\min}$ and $\sigma_{\max}$. Since $\left\|Wx\right\|_2$ scales linearly with $\left\|x\right\|_2$, for general $x$ the corresponding bounds on the norm of $Wx$ are $\sigma_{\min} \left\|x\right\|_2$ and $\sigma_{\max} \left\|x\right\|_2$.

∎

###### Corollary 2.

Let $\sigma_{\min}$ and $\sigma_{\max}$ be the minimum and maximum singular values of $W$, respectively. Then $\sigma_{\min}$ and $\sigma_{\max}$ are also the minimum and maximum singular values of $W^\top$.

###### Proof.

Since $W = U \Sigma V^\top$ where $U$ and $V$ are orthogonal, $W^\top = V \Sigma U^\top$. By the same construction as in the previous corollary, if $\left\|x\right\|_2 = 1$, then $\left\|W^\top x\right\|_2 = \sqrt{\sum_{i=1}^{r} \left(\sigma_i u_i^\top x\right)^2}$. For all $x$ such that $\left\|x\right\|_2 = 1$, this quantity is minimized and maximized for $x = u_{\min}$ and $x = u_{\max}$, respectively, where $u_{\min}$ and $u_{\max}$ are the left singular vectors corresponding to $\sigma_{\min}$ and $\sigma_{\max}$. The corresponding minimum and maximum norms are $\sigma_{\min}$ and $\sigma_{\max}$.

∎

Let a state update equation for an RNN be defined as

$$h_{t+1} = f\left(P h_t + W x_t + b\right).$$

Let $E$ be a loss function over the RNN.

###### Theorem 3.

In a DizzyRNN cell,

$$\left\|\frac{\partial E}{\partial h_t}\right\|_2 = \left\|\frac{\partial E}{\partial h_{t+1}}\right\|_2.$$

###### Proof.

Let $z = P h_t + W x_t + b$ and express

$$\frac{\partial E}{\partial h_t} = P^\top \left(f'(z) \odot \frac{\partial E}{\partial h_{t+1}}\right).$$

The $2$-norms of each side of this equation are

$$\left\|\frac{\partial E}{\partial h_t}\right\|_2 = \left\|P^\top \left(f'(z) \odot \frac{\partial E}{\partial h_{t+1}}\right)\right\|_2.$$

In a DizzyRNN, $f$ is the absolute value function, thus the elements of $f'(z)$ are $1$ or $-1$. $P$ is orthogonal since it is defined by a composition of Givens rotations. Neither the element-wise product with $f'(z)$ nor multiplication by $P^\top$ scales the norm of the vector $\frac{\partial E}{\partial h_{t+1}}$, thus

$$\left\|\frac{\partial E}{\partial h_t}\right\|_2 = \left\|\frac{\partial E}{\partial h_{t+1}}\right\|_2.$$

∎

If instead $f$ is the ReLU non-linearity, then $\left\|\frac{\partial E}{\partial h_t}\right\|_2$ is only equal to $\left\|\frac{\partial E}{\partial h_{t+1}}\right\|_2$ in the case where all values in $z$ are non-negative, resulting in a diminishing gradient in all other cases.

###### Theorem 4.

If $W$ is square, then

$$\left\|\frac{\partial E}{\partial x_t}\right\|_2 = \left\|\frac{\partial E}{\partial h_{t+1}}\right\|_2.$$

###### Proof.

With $W$ square and represented as packed rotations, $W$ is orthogonal; by symmetry with the proof of Theorem 3, the norms are shown to be equal. ∎