Approximating Real-Time Recurrent Learning with Random Kronecker Factors


Asier Mujika
Department of Computer Science
ETH Zürich, Switzerland
asierm@inf.ethz.ch
Florian Meier
Department of Computer Science
ETH Zürich, Switzerland
meierflo@inf.ethz.ch
Angelika Steger
Department of Computer Science
ETH Zürich, Switzerland
steger@inf.ethz.ch
Author was supported by grant no. CRSII5_173721 of the Swiss National Science Foundation.
Abstract

Despite all the impressive advances of recurrent neural networks, sequential data is still in need of better modelling. Truncated backpropagation through time (TBPTT), the learning algorithm most widely used in practice, suffers from the truncation bias, which drastically limits its ability to learn long-term dependencies. The Real-Time Recurrent Learning algorithm (RTRL) addresses this issue, but its high computational requirements make it infeasible in practice. The Unbiased Online Recurrent Optimization algorithm (UORO) approximates RTRL with a smaller runtime and memory cost, but with the disadvantage of obtaining noisy gradients that also limit its practical applicability. In this paper we propose the Kronecker Factored RTRL (KF-RTRL) algorithm that uses a Kronecker product decomposition to approximate the gradients for a large class of RNNs. We show that KF-RTRL is an unbiased and memory efficient online learning algorithm. Our theoretical analysis shows that, under reasonable assumptions, the noise introduced by our algorithm is not only stable over time but also asymptotically much smaller than the one of the UORO algorithm. We also confirm these theoretical results experimentally. Further, we show empirically that the KF-RTRL algorithm captures long-term dependencies and almost matches the performance of TBPTT on real-world tasks by training Recurrent Highway Networks on a synthetic string memorization task and on the Penn TreeBank task, respectively. These results indicate that RTRL-based approaches might be a promising future alternative to TBPTT.

 


1 Introduction

Processing sequential data is a central problem in the field of machine learning. In recent years, Recurrent Neural Networks (RNN) have achieved great success, outperforming all other approaches in many different sequential tasks like machine translation, language modeling, reinforcement learning and more.

Despite this success, it remains unclear how to train such models. The standard algorithm, Truncated Back Propagation Through Time (TBPTT) [19], considers the RNN as a feed-forward model over time with shared parameters. While this approach works extremely well in the range of a few hundred time-steps, it scales very poorly to longer time dependencies. As the time horizon is increased, the parameters are updated less frequently and more memory is required to store all past states. This makes TBPTT ill-suited for learning long-term dependencies in sequential tasks.

An appealing alternative to TBPTT is Real-Time Recurrent Learning (RTRL) [20]. This algorithm allows online updates of the parameters and learning arbitrarily long-term dependencies by exploiting the recurrent structure of the network for forward propagation of the gradient. Despite its impressive theoretical properties, RTRL is impractical for decently sized RNNs because run-time and memory costs scale poorly with network size.

As a remedy to this issue, Tallec and Ollivier [17] proposed the Unbiased Online Recurrent Optimization (UORO) algorithm. This algorithm unbiasedly approximates the gradients, which reduces the run-time and memory costs such that they are similar to the costs required to run the RNN forward. Unbiasedness is of central importance since it guarantees convergence to a local optimum. Still, the variance of the gradients slows down learning.

Here we propose the Kronecker Factored RTRL (KF-RTRL) algorithm. This algorithm builds on the ideas of the UORO algorithm, but uses Kronecker factors for the RTRL approximation. We show both theoretically and empirically that this drastically reduces the noise in the approximation and greatly improves learning. However, this comes at the cost of requiring more computation and of being applicable only to a restricted class of RNNs. Still, this class of RNNs is very general and includes the Tanh-RNN and Recurrent Highway Networks [21], among others.

The main contributions of this paper are:

  • We propose the KF-RTRL online learning algorithm.

  • We theoretically prove that our algorithm is unbiased and that, under reasonable assumptions, the noise is stable over time and asymptotically by a factor of n smaller than the noise of UORO, where n is the number of units.

  • We test KF-RTRL on a binary string memorization task, where our networks learn dependencies spanning the full length of the presented strings.

  • We evaluate on character-level Penn TreeBank, where the performance of KF-RTRL almost matches that of TBPTT with a truncation horizon of 25 steps.

  • We empirically confirm that the variance of KF-RTRL is stable over time and that increasing the number of units does not increase the noise significantly.

2 Related Work

Training Recurrent Neural Networks for finite length sequences is currently almost exclusively done using BackPropagation Through Time [16] (BPTT). The network is "unrolled" over time and considered as a feed-forward model with shared parameters (the same parameters are used at each time step). This makes it straightforward to apply backpropagation and compute the exact gradients needed for gradient descent.

However, this approach does not scale well to very long sequences, as the whole sequence needs to be processed before the gradients can be calculated, which makes training extremely slow and very memory intensive. In fact, BPTT cannot be applied to an online stream of data at all. In order to circumvent this issue, Truncated BackPropagation Through Time [19] (TBPTT) is generally used: the RNN is only "unrolled" for a fixed number of steps (the truncation horizon), and gradients beyond these steps are ignored; see the sketch below. Therefore, if the truncation horizon is smaller than the length of the dependencies needed to solve a task, the network cannot learn them.
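As an illustration, the following NumPy sketch (ours, not taken from any of the cited papers) trains a small tanh-RNN with TBPTT: gradients are backpropagated only inside windows of k steps, while the hidden state, but not the gradient, is carried across windows. The network size, the squared-error loss and all hyper-parameters are arbitrary choices for illustration.

import numpy as np

# Minimal TBPTT sketch for a tanh-RNN h_t = tanh(W @ [h_{t-1}; x_t]).
rng = np.random.default_rng(0)
n, m, T, k = 8, 4, 50, 10           # units, input size, sequence length, truncation horizon
W = rng.normal(0, 0.3, (n, n + m))  # recurrent + input weights
xs = rng.normal(size=(T, m))
ys = rng.normal(size=(T, n))        # dummy targets for a squared-error loss

h = np.zeros(n)
for start in range(0, T, k):
    # Forward pass over one truncated window, caching the states.
    cache, hs = [], [h]
    for t in range(start, min(start + k, T)):
        hx = np.concatenate([hs[-1], xs[t]])
        h_new = np.tanh(W @ hx)
        cache.append((hx, h_new, ys[t]))
        hs.append(h_new)
    # Backward pass: gradients flow only inside the window (this is the truncation bias).
    dW, dh = np.zeros_like(W), np.zeros(n)
    for hx, h_new, y in reversed(cache):
        dh = dh + 2.0 * (h_new - y)     # dL/dh_t from this step's loss
        dz = dh * (1.0 - h_new ** 2)    # through the tanh non-linearity
        dW += np.outer(dz, hx)
        dh = W[:, :n].T @ dz            # propagate to h_{t-1}; stops at the window start
    W -= 0.01 * dW                      # one parameter update per window
    h = hs[-1]                          # carry the state forward, but not the gradient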

Several approaches have been proposed to deal with the truncation horizon. Anticipated Reweighted Truncated Backpropagation [18] samples different truncation horizons and weights the calculated gradients such that the expected gradient is that of the whole sequence. Jaderberg et al. [6] proposed Decoupled Neural Interfaces, where the network learns to predict incoming gradients from the future. Then, it uses these predictions for learning. The main assumption of this model is that all future gradients can be computed as a function of the current hidden state.

A more extreme proposal is calculating the gradients forward and not doing any kind of BPTT. This is known as Real-Time Recurrent Learning [20] (RTRL). RTRL allows updating the model parameters online after observing each input/output pair; we explain it in detail in Section 3. However, its large running time of order O(n^4) and memory requirements of order O(n^3), where n is the number of units of a fully connected RNN, make it impractical for large networks. To fix this, Tallec and Ollivier [17] presented the Unbiased Online Recurrent Optimization (UORO) algorithm. This algorithm approximates RTRL using a low-rank matrix, which makes the run-time of the algorithm of the same order as a single forward pass of the RNN, O(n^2). However, the low-rank approximation introduces a lot of variance, which negatively affects learning, as we show in Section 5.

Other alternatives are Reservoir computing approaches [9] like Echo State Networks [7] or Liquid State Machines [10]. In these approaches, the recurrent weights are fixed and only the output connections are learned. This allows online learning, as gradients do not need to be propagated back in time. However, it prevents any kind of learning in the recurrent connections, which makes the RNN computationally much less powerful.

3 Real-Time Recurrent Learning and UORO

RTRL [20] is an online learning algorithm for RNNs. Contrary to TBPTT, no previous inputs or network states need to be stored. Let h_t denote the hidden state at time t, x_t the input and θ the parameters. At any time-step t, RTRL only requires h_{t-1}, x_t and dh_{t-1}/dθ in order to compute dh_t/dθ. With dh_t/dθ at hand, the gradient dL_t/dθ of the loss at time t is obtained by applying the chain rule. Thus, the parameters can be updated online, that is, one parameter update can be performed for each observed input/output pair.

In order to present the RTRL update precisely, let us first define an RNN formally. An RNN is a differentiable function f that maps an input x_t, a hidden state h_{t-1} and parameters θ to the next hidden state h_t = f(x_t, h_{t-1}, θ). At any time-step t, RTRL computes dh_t/dθ by applying the chain rule:

dh_t/dθ = (∂h_t/∂h_{t-1}) (dh_{t-1}/dθ) + (∂h_t/∂x_t) (dx_t/dθ) + ∂h_t/∂θ      (1)
        = (∂h_t/∂h_{t-1}) (dh_{t-1}/dθ) + ∂h_t/∂θ ,      (2)

where the middle term vanishes because we assume that the inputs do not depend on the parameters. For notational simplicity, define G_t := dh_t/dθ, H_t := ∂h_t/∂h_{t-1} and F_t := ∂h_t/∂θ, which reduces the above equation to

G_t = H_t G_{t-1} + F_t .      (3)

Both H_t and F_t are straightforward to compute for RNNs. We assume the initial state h_0 to be fixed, which implies G_0 = 0. With all this, RTRL obtains the exact gradient at each time-step and enables online updates of the parameters. However, updating the parameters during the sequence means that G_t is only exact in the limit where the learning rate is arbitrarily small. In practice, learning rates are sufficiently small for this not to be an issue.

The downside of RTRL is that, for a fully connected RNN with n units, the matrices H_t and G_t have size n × n and n × O(n^2), respectively. Therefore, computing Equation 3 takes O(n^4) operations and requires O(n^3) storage, which makes RTRL impractical for large networks.
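To make the recursion concrete, here is a minimal NumPy sketch (ours) of RTRL for a simple tanh-RNN h_t = tanh(W [h_{t-1}; x_t]). It carries the full Jacobian G_t forward with Equation 3, which is exactly what makes RTRL expensive: the update H_t G_{t-1} costs O(n^4) operations and G_t occupies O(n^3) memory. The squared-error loss and all sizes are illustrative.

import numpy as np

rng = np.random.default_rng(1)
n, m = 6, 3
W = rng.normal(0, 0.3, (n, n + m))
h = np.zeros(n)
G = np.zeros((n, W.size))                  # G_t = dh_t/dW, parameters in W.ravel() order

for t in range(100):
    x = rng.normal(size=m)
    y = rng.normal(size=n)                 # dummy target for this step's loss
    hx = np.concatenate([h, x])            # previous state concatenated with the input
    h = np.tanh(W @ hx)
    d = 1.0 - h ** 2                       # elementwise derivative of tanh
    H = d[:, None] * W[:, :n]              # H_t = dh_t/dh_{t-1}
    F = np.kron(np.diag(d), hx[None, :])   # F_t: parameter W_{j,i} only affects unit j
    G = H @ G + F                          # Equation (3): O(n^4) time, O(n^3) memory
    dL_dh = 2.0 * (h - y)                  # squared-error loss at this step
    dL_dW = (dL_dh @ G).reshape(W.shape)   # chain rule, exact online gradient
    W -= 0.01 * dL_dW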

The UORO algorithm [17] addresses this issue and reduces run-time and memory requirements to O(n^2), at the cost of obtaining an unbiased but noisy estimate of G_t. More precisely, the UORO algorithm keeps an unbiased rank-one estimate of G_t by approximating it as the outer product of two vectors, of size n and O(n^2), respectively. At any time t, the UORO update consists of two approximation steps. Assume that the unbiased rank-one approximation of G_{t-1} is given. First, F_t is approximated by a rank-one matrix. Second, the sum of the two rank-one matrices is approximated by a single rank-one matrix, yielding the estimate of G_t. The estimate is provably unbiased and the UORO update requires the same run-time and memory as updating the RNN [17].
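The core primitive behind UORO's estimate is the rank-one trick of Ollivier et al. [15]: a sum of outer products is replaced by a single outer product that is correct in expectation, using independent random signs and variance-minimizing scales. The following self-contained NumPy check (ours) illustrates the trick; the vectors are arbitrary.

import numpy as np

rng = np.random.default_rng(2)
n, p = 5, 7
v1, v2 = rng.normal(size=n), rng.normal(size=n)
w1, w2 = rng.normal(size=p), rng.normal(size=p)
target = np.outer(v1, w1) + np.outer(v2, w2)          # the quantity to approximate

def rank_one_estimate(rng):
    c1, c2 = rng.choice([-1.0, 1.0], size=2)          # independent random signs
    r1 = np.sqrt(np.linalg.norm(w1) / np.linalg.norm(v1))  # variance-minimizing scales
    r2 = np.sqrt(np.linalg.norm(w2) / np.linalg.norm(v2))
    v = c1 * r1 * v1 + c2 * r2 * v2
    w = c1 * w1 / r1 + c2 * w2 / r2
    return np.outer(v, w)                              # rank one, unbiased for `target`

mean = np.mean([rank_one_estimate(rng) for _ in range(20000)], axis=0)
print(np.abs(mean - target).max())                     # small: the estimate is unbiased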

4 Kronecker Factored RTRL

Our proposed Kronecker Factored RTRL algorithm (KF-RTRL) is an online learning algorithm for RNNs which does not require storing any previous inputs or network states. KF-RTRL approximates G_t, the derivative of the internal state with respect to the parameters (see Section 3), by a Kronecker product. The following theorem shows that the KF-RTRL algorithm satisfies various desirable properties.

Theorem 1.

For the class of RNNs defined in Lemma 1, the estimate G'_t obtained by the KF-RTRL algorithm satisfies

  1. G'_t is an unbiased estimate of G_t, that is, E[G'_t] = G_t, and

  2. assuming that the spectral norm of H_t is at most 1 − ε for some arbitrarily small ε > 0, then at any time t, the mean of the variances of the entries of G'_t is of order O(1/n).

Moreover, one time-step of the KF-RTRL algorithm requires O(n^3) operations and O(n^2) memory.

We remark that the class of RNNs defined in Lemma 1 contains many widely used RNN architectures, such as Recurrent Highway Networks and Tanh-RNNs, but does not include GRUs [4] or LSTMs [5]. Further, the assumption that the spectral norm of H_t is at most 1 − ε is reasonable, as otherwise gradients might grow exponentially, as noted by Bengio et al. [2]. Lastly, the bottleneck of the algorithm is a matrix multiplication, and thus for sufficiently large networks an algorithm with a better run-time than O(n^3) may be practical.

In the remainder of this section, we explain the main ideas behind the KF-RTRL algorithm (formal proofs are given in the appendix). In the subsequent Section 5 we show that these theoretical properties carry over into practical application. KF-RTRL is well suited for learning long-term dependencies (see Section 5.1) and almost matches the performance of TBPTT on a complex real world task, that is, character level language modeling (see Section 5.2). Moreover, we confirm empirically that the variance of the KF-RTRL estimate is stable over time and scales well as the network size increases (see Section 5.3).

Before giving the theoretical background and motivating the key ideas of KF-RTRL, we give a brief overview of the KF-RTRL algorithm. At any time-step t, KF-RTRL maintains a vector u_t and a matrix A_t such that G'_t = u_t ⊗ A_t satisfies E[G'_t] = G_t. Both H_t G'_{t-1} and F_t are factored as a Kronecker product, and then the sum of these two Kronecker products is approximated by one Kronecker product. This approximation step (see Lemma 2) works analogously to the second approximation step of the UORO algorithm (see the rank-one trick in [17]). The detailed algorithmic steps of KF-RTRL are presented in Algorithm 1 and motivated below.

  Inputs: input x_t, target y_t, previous recurrent state h_{t-1}, parameters θ, and factors u_{t-1} and A_{t-1} such that E[u_{t-1} ⊗ A_{t-1}] = G_{t-1}; stochastic optimizer opt and its learning rate η
  Outputs: new recurrent state h_t, updated parameters θ, and factors u_t and A_t such that E[u_t ⊗ A_t] = G_t
  /* Run one step of the RNN and compute the necessary matrices */
   h_t ← f(x_t, h_{t-1}, θ)
   ĥ_{t-1} ← concatenation of h_{t-1} and x_t
   H_t ← ∂h_t/∂h_{t-1},  D_t ← matrix of point-wise derivatives from Lemma 1
  /* Compute variance minimization and random multipliers */
   p_1 ← sqrt(||H_t A_{t-1}|| / ||u_{t-1}||),  p_2 ← sqrt(||D_t|| / ||ĥ_{t-1}||)
   c_1, c_2 ← independent random signs
  /* Compute next approximation */
   u_t ← c_1 p_1 u_{t-1} + c_2 p_2 ĥ_{t-1}
   A_t ← (c_1 / p_1) H_t A_{t-1} + (c_2 / p_2) D_t
  /* Compute gradients and update parameters */
   dL_t/dθ ← (∂L_t/∂h_t) (u_t ⊗ A_t)
   θ ← opt(θ, dL_t/dθ, η)
Algorithm 1 — One step of KF-RTRL (from time t − 1 to t)

Theoretical motivation of the KF-RTRL algorithm

The key observation that motivates our algorithm is that, for many popular RNN architectures, F_t can be exactly decomposed as the Kronecker product of a vector and a diagonal matrix (one diagonal block per parameter matrix), see Lemma 1. Such a decomposition exists if every parameter affects exactly one element of h_t, assuming h_{t-1} and x_t are fixed. This condition is satisfied by many popular RNN architectures like the Tanh-RNN and Recurrent Highway Networks. The class of RNNs considered in the following lemma contains all these architectures.

Lemma 1.

Assume the learnable parameters θ are a set of matrices W_1, …, W_r, let ĥ_{t-1} be the hidden state h_{t-1} concatenated with the input x_t, and let z_k = W_k ĥ_{t-1} for k = 1, …, r. Assume that h_t is obtained by point-wise operations over the z_k's, that is, (h_t)_j = g((z_1)_j, …, (z_r)_j), where g is such that each partial derivative ∂(h_t)_j / ∂(z_k)_j is bounded by a constant. Let D^k_t be the diagonal matrix defined by (D^k_t)_{jj} = ∂(h_t)_j / ∂(z_k)_j, and let D_t = (D^1_t, …, D^r_t) be their horizontal concatenation. Then, for a suitable ordering of the parameters, it holds that F_t = ĥ_{t-1} ⊗ D_t.
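As a concrete instance of Lemma 1, consider a tanh-RNN with a single parameter matrix (r = 1). The small NumPy check below (ours) builds the dense Jacobian ∂h_t/∂W entry by entry and confirms that it equals a Kronecker product of the concatenated state ĥ_{t-1} and the diagonal matrix D_t; which factor comes first depends only on how the parameters are flattened.

import numpy as np

rng = np.random.default_rng(3)
n, m = 4, 3
W = rng.normal(size=(n, n + m))
h_prev, x = rng.normal(size=n), rng.normal(size=m)

hx = np.concatenate([h_prev, x])        # \hat h_{t-1}
h = np.tanh(W @ hx)
D = np.diag(1.0 - h ** 2)               # D_t: point-wise derivative of the non-linearity

# Dense Jacobian dh/dW, entry by entry: dh_j / dW_{j,i} = hx_i * D_{jj}, zero otherwise.
F_dense = np.zeros((n, n, n + m))
for j in range(n):
    F_dense[j, j, :] = D[j, j] * hx
F_dense = F_dense.reshape(n, -1)        # parameters flattened in W.ravel() order

# Lemma 1: the same Jacobian is a Kronecker product (D comes first for this flattening).
F_kron = np.kron(D, hx[None, :])
print(np.allclose(F_dense, F_kron))     # -> True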

Further, we observe that the sum of two Kronecker products can be approximated by a single Kronecker product. The following lemma, which is the analogue of Proposition 1 in [15] for Kronecker products, states how this is achieved.

Lemma 2.

Let C = A_1 ⊗ B_1 + A_2 ⊗ B_2, where the matrix A_2 has the same size as the matrix A_1 and B_2 has the same size as B_1. Let c_1 and c_2 be chosen independently and uniformly at random from {−1, +1} and let p_1, p_2 be positive reals. Define A' = c_1 p_1 A_1 + c_2 p_2 A_2 and B' = (c_1 / p_1) B_1 + (c_2 / p_2) B_2. Then, A' ⊗ B' is an unbiased approximation of C, that is, E[A' ⊗ B'] = C. Moreover, the variance of this approximation is minimized by setting the free parameters p_i = sqrt(||B_i|| / ||A_i||).
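The following NumPy check (ours, with arbitrary small matrices) illustrates Lemma 2: the single Kronecker product is unbiased, and the choice p_i = sqrt(||B_i|| / ||A_i||) yields a clearly smaller variance than the naive choice p_1 = p_2 = 1. The second factor pair is deliberately rescaled so that the difference is visible.

import numpy as np

rng = np.random.default_rng(4)
A1, B1 = rng.normal(size=(1, 5)), rng.normal(size=(4, 4))
A2, B2 = 3.0 * rng.normal(size=(1, 5)), 0.3 * rng.normal(size=(4, 4))
target = np.kron(A1, B1) + np.kron(A2, B2)

def estimate(p1, p2, rng):
    c1, c2 = rng.choice([-1.0, 1.0], size=2)
    A = c1 * p1 * A1 + c2 * p2 * A2
    B = c1 * B1 / p1 + c2 * B2 / p2
    return np.kron(A, B)

def bias_and_variance(p1, p2, trials=20000):
    samples = np.array([estimate(p1, p2, rng) for _ in range(trials)])
    bias = np.abs(samples.mean(0) - target).max()           # ~0 up to Monte Carlo error
    var = ((samples - target) ** 2).sum(axis=(1, 2)).mean() # total variance
    return bias, var

p1 = np.sqrt(np.linalg.norm(B1) / np.linalg.norm(A1))       # Lemma 2's optimal choice
p2 = np.sqrt(np.linalg.norm(B2) / np.linalg.norm(A2))
print(bias_and_variance(1.0, 1.0))   # unbiased, large variance
print(bias_and_variance(p1, p2))     # unbiased, much smaller variance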

Lastly, we show by induction that there exist random vectors u_t and random matrices A_t such that G'_t = u_t ⊗ A_t satisfies E[G'_t] = G_t. Assume that G'_{t-1} = u_{t-1} ⊗ A_{t-1} satisfies E[G'_{t-1}] = G_{t-1}. Equation 3 and Lemma 1 imply that

G_t = H_t G_{t-1} + F_t = H_t E[u_{t-1} ⊗ A_{t-1}] + ĥ_{t-1} ⊗ D_t .      (4)

Next, by linearity of expectation and since the first dimension of u_{t-1} is 1, it follows that

G_t = E[ u_{t-1} ⊗ (H_t A_{t-1}) + ĥ_{t-1} ⊗ D_t ] .      (5)

Finally, we obtain by Lemma 2, for any p_1, p_2 > 0,

G_t = E[ (c_1 p_1 u_{t-1} + c_2 p_2 ĥ_{t-1}) ⊗ ((c_1 / p_1) H_t A_{t-1} + (c_2 / p_2) D_t) ] ,      (6)

where the expectation is taken over the probability distribution of u_{t-1}, A_{t-1}, c_1 and c_2.

With these observations at hand, we are ready to present the KF-RTRL algorithm. At any time-step t, we receive the estimates u_{t-1} and A_{t-1} from the previous time-step. First, compute h_t, H_t and D_t. Then, choose c_1 and c_2 independently and uniformly at random from {−1, +1} and compute

u_t = c_1 p_1 u_{t-1} + c_2 p_2 ĥ_{t-1} ,      (7)
A_t = (c_1 / p_1) H_t A_{t-1} + (c_2 / p_2) D_t ,      (8)

where p_1 = sqrt(||H_t A_{t-1}|| / ||u_{t-1}||) and p_2 = sqrt(||D_t|| / ||ĥ_{t-1}||). Lastly, our algorithm computes dL_t/dθ = (∂L_t/∂h_t) G'_t with G'_t = u_t ⊗ A_t, which is used for optimizing the parameters. For a detailed pseudo-code of the KF-RTRL algorithm see Algorithm 1. In order to see that G'_t is an unbiased estimate of G_t, we apply once more linearity of expectation: E[G'_t] = E[H_t G'_{t-1} + F_t] = H_t G_{t-1} + F_t = G_t.

One KF-RTRL update has run-time O(n^3) and requires O(n^2) memory. In order to see the statement for the memory requirement, note that all involved matrices and vectors have O(n^2) elements, except G'_t itself. However, we do not need to explicitly compute G'_t = u_t ⊗ A_t in order to obtain dL_t/dθ = (∂L_t/∂h_t)(u_t ⊗ A_t), because this expression can be evaluated as u_t ⊗ ((∂L_t/∂h_t) A_t), in this order. In order to see the statement for the run-time, note that both H_t and A_{t-1} have O(n) rows and columns. Therefore, computing H_t A_{t-1} requires O(n^3) operations. All other arithmetic operations trivially require run-time O(n^2).
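Putting the pieces together, the sketch below (ours, for the tanh-RNN special case with r = 1 and an illustrative squared-error loss) performs complete KF-RTRL steps in NumPy. The O(n^3) bottleneck is the single matrix product H_t A_{t-1}; the gradient is contracted so that G'_t is never formed explicitly.

import numpy as np

rng = np.random.default_rng(5)
n, m = 16, 8
W = rng.normal(0, 0.2, (n, n + m))
h = np.zeros(n)
u = np.zeros(n + m)                       # Kronecker factor u_{t-1} (vector)
A = np.zeros((n, n))                      # Kronecker factor A_{t-1} (matrix)
lr = 0.01

def norm(M):
    return np.linalg.norm(M) + 1e-12      # avoid division by zero at t = 0

for t in range(200):
    x = rng.normal(size=m)
    y = rng.normal(size=n)                # dummy target
    hx = np.concatenate([h, x])           # \hat h_{t-1}
    h = np.tanh(W @ hx)
    d = 1.0 - h ** 2
    H = d[:, None] * W[:, :n]             # H_t = dh_t/dh_{t-1}
    D = np.diag(d)                        # D_t from Lemma 1
    HA = H @ A                            # the O(n^3) bottleneck

    # Lemma 2: collapse  u ⊗ (H A) + hx ⊗ D  into a single Kronecker product.
    p1 = np.sqrt(norm(HA) / norm(u))
    p2 = np.sqrt(norm(D) / norm(hx))
    c1, c2 = rng.choice([-1.0, 1.0], size=2)
    u = c1 * p1 * u + c2 * p2 * hx        # Equation (7)
    A = (c1 / p1) * HA + (c2 / p2) * D    # Equation (8)

    # dL/dW = (dL/dh) G'_t with G'_t = u ⊗ A, contracted without forming G'_t.
    dL_dh = 2.0 * (h - y)
    dL_dW = np.outer(dL_dh @ A, u)        # ((dL/dh) A) combined with u, shape of W
    W -= lr * dL_dW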

The proofs of Lemmas 1 and 2 and of the second statement of Theorem 1 are given in the appendix.

Comparison of the KF-RTRL with the UORO algorithm

Since the variance of the gradient estimate is directly linked to convergence speed and performance, let us first compare the variance of the two algorithms. Theorem 1 states that the mean of the variances of the entries of G'_t is of order O(1/n). In the appendix, we show a slightly stronger statement: if ||F_t|| ≤ C for all t, then the mean of the variances of the entries is of order O(C^2 / n^3), where n^3 is the order of the number of elements of G_t. The bound O(1/n) is then obtained from a trivial bound on the size of the entries of ĥ_{t-1} and D_t, which gives ||F_t|| = O(n). In the appendix, we show further that already the first step of the UORO approximation, in which F_t is approximated by a rank-one matrix, introduces a mean variance per entry of order ||F_t||^2 / n^2. Assuming that all further approximations would not add any noise, the same trivial bounds on the entries of ĥ_{t-1} and D_t yield a mean variance of order 1. We conclude that the variance of KF-RTRL is asymptotically by (at least) a factor n smaller than the variance of UORO.
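The gap can already be seen in the very first approximation step. The small NumPy experiment below (ours, with a randomly generated F_t of the form given by Lemma 1) estimates the per-entry variance of a UORO-style rank-one approximation of F_t; it stays of order one as n grows, whereas the Kronecker factorization used by KF-RTRL represents the same F_t exactly and adds no noise at this step.

import numpy as np

rng = np.random.default_rng(6)

def per_entry_variance_of_rank_one(n, m, trials=200):
    hx = rng.uniform(-1, 1, size=n + m)          # \hat h_{t-1}, entries in [-1, 1]
    d = rng.uniform(0.1, 1.0, size=n)            # bounded diagonal of D_t
    F = np.kron(np.diag(d), hx[None, :])         # exact F_t (Lemma 1)
    errs = []
    for _ in range(trials):
        nu = rng.choice([-1.0, 1.0], size=n)     # random sign vector
        F_hat = np.outer(nu, nu @ F)             # rank-one, unbiased: E[nu nu^T] = I
        errs.append(((F_hat - F) ** 2).mean())   # mean squared error per entry
    return np.mean(errs)

for n in (8, 16, 32, 64):
    print(n, per_entry_variance_of_rank_one(n, n // 2))
# The per-entry variance stays of order 1 (it does not vanish as n grows),
# while KF-RTRL stores F_t = \hat h_{t-1} ⊗ D_t exactly, with zero error.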

Next, let us compare the generality of the two algorithms when applied to different network architectures. The KF-RTRL algorithm requires that in one time-step each parameter only affects one element of the next hidden state (see Lemma 1). Although many widely used RNN architectures satisfy this requirement, seen from this angle the UORO algorithm is favorable, as it is applicable to arbitrary RNN architectures.

Finally, let us compare memory requirements and run-time of KF-RTRL and UORO. In terms of memory requirements, both algorithms require O(n^2) storage and perform equally well. In terms of run-time, KF-RTRL requires O(n^3), while UORO requires O(n^2) operations. However, the faster run-time of UORO comes at the cost of a larger variance and therefore worse performance. In order to reduce the variance of UORO by a factor n, one would need n independent samples of the gradient estimate. This could be achieved by reducing the learning rate by a factor of n, which would then require n times more data, or by sampling n times in parallel, which would require n times more memory. Still, our empirical investigation shows that KF-RTRL outperforms UORO even when averaging n UORO samples to reduce the variance (see Figure 3). Moreover, for sufficiently large networks the scaling of the KF-RTRL run-time improves by using fast matrix multiplication algorithms.

5 Experiments

In this section, we quantify the effect that the reduced variance of KF-RTRL, compared to that of UORO, has on learning. First, we evaluate the ability to learn long-term dependencies on a deterministic binary string memorization task. Since most real world problems are more complex and of stochastic nature, we secondly evaluate the performance of the learning algorithms on a character-level language modeling task, which is a more realistic benchmark. For these two tasks, we also compare to learning with Truncated BPTT. Finally, we investigate the variance of KF-RTRL and UORO by comparing them to their exact counterpart, RTRL. For all experiments, we use a single-layer Recurrent Highway Network [21].¹ (¹For implementation simplicity, we use a slightly different but closely related non-linearity function. Both functions have very similar properties, and therefore, we do not believe this has any significant effect.)

5.1 Copy Task

In the copy task experiment, a binary string is presented sequentially to an RNN. Once the full string has been presented, the RNN should reconstruct the original string without any further information. Figure 1 (right) shows several input/output pairs. We refer to the length of the string as T. Figure 1 (left) summarizes the results. The smaller variance of KF-RTRL greatly helps to learn faster and to capture longer dependencies. KF-RTRL solves the task on average for considerably longer strings than UORO. As expected, TBPTT cannot learn dependencies that are longer than its truncation horizon.

Figure 1: Copy Task. Left: learning curves of UORO, KF-RTRL and TBPTT; we plot the mean and standard deviation (shaded area) over repeated trials. Right: three input and output examples, e.g. input "#01101------------" with target output "------------#01101". In this experiment, we start with short strings and increase T by one whenever the RNN error drops below a fixed threshold in bits/char. After each sequence, the hidden state is reset to all zeros. To improve performance, the length of each sample is picked uniformly at random between 1 and the current T. This forces the network to learn a general algorithm for the task, rather than just learning to solve sequences of one particular length. We use a single-layer RHN and optimize the log-likelihood using the Adam optimizer [8] with default TensorFlow [1] parameters. For each model we pick the optimal learning rate from a small grid, and we repeat each experiment several times.
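For reference, a small helper (ours; the token encoding is illustrative) that generates input/target pairs of the form shown in Figure 1:

import numpy as np

def copy_task_sample(length, rng):
    """One copy-task example (cf. Figure 1): the string is presented once and
    must be reproduced after a sequence of blanks."""
    bits = "".join(str(b) for b in rng.integers(0, 2, size=length))
    marker, blanks = "#", "-" * (length + 1)
    inputs = marker + bits + blanks
    targets = blanks + marker + bits
    return inputs, targets

rng = np.random.default_rng(7)
for _ in range(3):
    x, y = copy_task_sample(rng.integers(1, 6), rng)   # lengths drawn at random
    print(x, "->", y)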

5.2 Character level language modeling on the Penn Treebank dataset

A standard test for RNNs is character-level language modeling. The network receives a text sequentially, character by character, and at each time-step it must predict the next character. This task is very challenging, as it requires both long- and short-term dependencies. Additionally, it is highly stochastic, i.e. for the same input sequence there are many possible continuations, but only one is observed at each training step. Figure 2 and Table 1 summarize the results. Truncated BPTT outperforms both online learning algorithms, but KF-RTRL almost reaches its performance and considerably outperforms UORO. For this task, the noise introduced by the approximation is more harmful than the truncation bias. This is probably the case because the short-term dependencies dominate the loss, as indicated by the small difference between TBPTT with truncation horizons 5 and 25.

For this experiment we use the Penn TreeBank [11] dataset, a collection of Wall Street Journal articles. The text is lower-cased and the vocabulary is restricted to 10K words. Out-of-vocabulary words are replaced by "<unk>" and numbers by "N". We split the data following Mikolov et al. [14]. The experimental setup is the same as in the Copy task, and we pick the optimal learning rate from the same range. Apart from that, we reset the hidden state to the all-zeros state with a small probability at each time step. This technique was introduced by Melis et al. [12] to improve the performance on the validation set, as the initial state for the validation is the all-zeros state. Additionally, this helps the online learning algorithms, as it resets the gradient approximation, getting rid of stale gradients. Similar techniques have been shown [3] to also improve RTRL.

Figure 2: Validation performance on Penn TreeBank in bits per character (BPC). The small variance of the KF-RTRL approximation considerably improves the performance compared to UORO.

Table 1: Results on Penn TreeBank in BPC. Merity et al. [13] is currently the state of the art (trained with TBPTT) and uses a much larger model (13.8M parameters). For simplicity we do not report standard deviations, as all of them are small.

  Name                 Validation   Test
  KF-RTRL              1.77         1.72
  UORO                 2.63         2.61
  TBPTT-5              1.64         1.58
  TBPTT-25             1.61         1.56
  Merity et al. [13]   -            1.18

5.3 Variance Analysis

With our final set of experiments, we empirically measure how the noise evolves over time and how it is affected by the number of units n. Here, we also compare to averaging n samples of UORO, which we refer to as UORO-AVG; this brings the computation cost on par with that of KF-RTRL, O(n^3). For each experiment, we compute the cosine of the angle φ between the gradient estimate and the exact gradient of the loss with respect to the parameters. Intuitively, cos(φ) measures how aligned the gradients are, even if their magnitudes differ. Figure 3 (left) shows that cos(φ) is stable over time and the noise does not accumulate for any of the three algorithms. Figure 3 (right) shows that KF-RTRL scales better with the number of units than both UORO-AVG and UORO. In the first experiment, we run several untrained RHNs over an initial segment of Penn TreeBank. In the second experiment, we compute cos(φ) after running RHNs with different numbers of units for a fixed number of steps on Penn TreeBank. We perform several repetitions per experiment and plot the mean and standard deviation.
Figure 3: Variance analysis: We compare the cosine of the angle between the approximated and the true value of dL_t/dθ. A cosine of 1 implies that the approximation and the true value are exactly aligned, while a random vector gets a cosine of 0 in expectation. The left panel shows that the variance is stable over time for the three algorithms. The right panel shows that the variance of KF-RTRL is almost unaffected by the number of units, while UORO and UORO-AVG degrade more quickly as the network size increases.
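The alignment measure used in Figure 3 is simply the cosine similarity between the flattened gradient estimate and the flattened exact gradient; a minimal helper (ours) is given below.

import numpy as np

def gradient_cosine(approx_grad, exact_grad):
    """Cosine of the angle between two gradients (flattened to vectors).
    1.0 means perfectly aligned; a random direction gives about 0 in expectation."""
    a, b = np.ravel(approx_grad), np.ravel(exact_grad)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))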

6 Conclusion

In this paper, we have presented the KF-RTRL online learning algorithm. We have proven that it approximates RTRL in an unbiased way, and that under reasonable assumptions the noise is stable over time and much smaller than that of UORO, the only other previously known unbiased RTRL approximation algorithm. Additionally, we have empirically verified that the reduced variance of our algorithm greatly improves learning on the two tested tasks. In the first task, an RHN trained with KF-RTRL effectively captures long-term dependencies, learning to memorize long binary strings. In the second task, it almost matches the performance of TBPTT on a standard RNN benchmark, character-level language modeling on Penn TreeBank. More importantly, our work opens up interesting directions for future work, as even minor reductions of the noise could make the approach a viable alternative to TBPTT, especially for tasks with inherent long-term dependencies. For example, constraining the weights, constraining the activations or using some form of regularization could reduce the noise. Further, it may be possible to design architectures that make the approximation less noisy. Moreover, one might attempt to improve the run-time of KF-RTRL by using approximate matrix multiplication algorithms or by inducing properties on the matrices that allow for fast matrix multiplication, like sparsity or low rank. This work advances the understanding of how unbiased gradients can be computed, which is of central importance as unbiasedness is essential for theoretical convergence guarantees. Since RTRL-based approaches satisfy this key assumption, it is of interest to develop them further.

References

  • Abadi et al. [2016] M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, et al. Tensorflow: A system for large-scale machine learning. In OSDI, volume 16, pages 265–283, 2016.
  • Bengio et al. [1994] Y. Bengio, P. Simard, and P. Frasconi. Learning long-term dependencies with gradient descent is difficult. IEEE transactions on neural networks, 5(2):157–166, 1994.
  • Catfolis [1993] T. Catfolis. A method for improving the real-time recurrent learning algorithm. Neural Networks, 6(6):807–821, 1993.
  • Chung et al. [2014] J. Chung, C. Gulcehre, K. Cho, and Y. Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555, 2014.
  • Hochreiter and Schmidhuber [1997] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural computation, 9(8):1735–1780, 1997.
  • Jaderberg et al. [2016] M. Jaderberg, W. M. Czarnecki, S. Osindero, O. Vinyals, A. Graves, D. Silver, and K. Kavukcuoglu. Decoupled neural interfaces using synthetic gradients. arXiv preprint arXiv:1608.05343, 2016.
  • Jaeger [2001] H. Jaeger. The “echo state” approach to analysing and training recurrent neural networks-with an erratum note. Bonn, Germany: German National Research Center for Information Technology GMD Technical Report, 148(34):13, 2001.
  • Kingma and Ba [2014] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
  • Lukoševičius and Jaeger [2009] M. Lukoševičius and H. Jaeger. Reservoir computing approaches to recurrent neural network training. Computer Science Review, 3(3):127–149, 2009.
  • Maass et al. [2002] W. Maass, T. Natschläger, and H. Markram. Real-time computing without stable states: A new framework for neural computation based on perturbations. Neural computation, 14(11):2531–2560, 2002.
  • Marcus et al. [1993] M. P. Marcus, M. A. Marcinkiewicz, and B. Santorini. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313–330, 1993.
  • Melis et al. [2017] G. Melis, C. Dyer, and P. Blunsom. On the state of the art of evaluation in neural language models. arXiv preprint arXiv:1707.05589, 2017.
  • Merity et al. [2018] S. Merity, N. S. Keskar, and R. Socher. An analysis of neural language modeling at multiple scales. arXiv preprint arXiv:1803.08240, 2018.
  • Mikolov et al. [2012] T. Mikolov, I. Sutskever, A. Deoras, H.-S. Le, S. Kombrink, and J. Cernocky. Subword language modeling with neural networks. Preprint, http://www.fit.vutbr.cz/imikolov/rnnlm/char.pdf, 2012.
  • Ollivier et al. [2015] Y. Ollivier, C. Tallec, and G. Charpiat. Training recurrent networks online without backtracking. arXiv preprint arXiv:1507.07680, 2015.
  • Rumelhart et al. [1986] D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning representations by back-propagating errors. Nature, 323(6088):533, 1986.
  • Tallec and Ollivier [2017a] C. Tallec and Y. Ollivier. Unbiased online recurrent optimization. arXiv preprint arXiv:1702.05043, 2017a.
  • Tallec and Ollivier [2017b] C. Tallec and Y. Ollivier. Unbiasing truncated backpropagation through time. arXiv preprint arXiv:1705.08209, 2017b.
  • Williams and Peng [1990] R. J. Williams and J. Peng. An efficient gradient-based algorithm for on-line training of recurrent network trajectories. Neural Computation, 2:490–501, 1990.
  • Williams and Zipser [1989] R. J. Williams and D. Zipser. A learning algorithm for continually running fully recurrent neural networks. Neural computation, 1(2):270–280, 1989.
  • Zilly et al. [2016] J. G. Zilly, R. K. Srivastava, J. Koutník, and J. Schmidhuber. Recurrent highway networks. arXiv preprint arXiv:1607.03474, 2016.

Appendix A

In this appendix, we prove all the lemmas and theorems whose proofs have been omitted in the main paper. For ease of readability, we restate the statement to be proven at the beginning of each section.

A.1 Basic Notation

The Hilbert-Schmidt norm of a matrix A is defined as ||A||_HS = sqrt(∑_{i,j} A_{ij}^2), and the Hilbert-Schmidt inner product of two matrices A and B of the same size is defined as ⟨A, B⟩_HS = ∑_{i,j} A_{ij} B_{ij}. When regarding an m × n matrix as a point in R^{mn}, the standard Euclidean norm of this point is the same as the Hilbert-Schmidt norm of the matrix. Therefore, for notational simplicity, we omit the subscript and write ||A|| and ⟨A, B⟩. Note that the Hilbert-Schmidt norm satisfies ||A ⊗ B|| = ||A|| ||B||. Further, we measure the variance of a random matrix A by the sum of the variances of its entries:

Var[A] = ∑_{i,j} Var[A_{ij}] .      (9)

A.2 Proof of Lemma 1

Lemma.
Assume the learnable parameters θ are a set of matrices W_1, …, W_r, let ĥ_{t-1} be the hidden state h_{t-1} concatenated with the input x_t, and let z_k = W_k ĥ_{t-1} for k = 1, …, r. Assume that h_t is obtained by point-wise operations over the z_k's, that is, (h_t)_j = g((z_1)_j, …, (z_r)_j), where g is such that each partial derivative ∂(h_t)_j / ∂(z_k)_j is bounded by a constant. Let D^k_t be the diagonal matrix defined by (D^k_t)_{jj} = ∂(h_t)_j / ∂(z_k)_j, and let D_t = (D^1_t, …, D^r_t) be their horizontal concatenation. Then, for a suitable ordering of the parameters, it holds that F_t = ĥ_{t-1} ⊗ D_t.
Proof.
Note that (h_t)_j depends on the entry (W_k)_{j',i} only if j' = j, that (z_k)_j = (W_k)_{j,:} ĥ_{t-1}, and that ∂(z_k)_j / ∂(W_k)_{j,i} = (ĥ_{t-1})_i. Therefore

∂(h_t)_j / ∂(W_k)_{j',i} = δ_{jj'} (ĥ_{t-1})_i (D^k_t)_{jj} ,      (10)

where δ_{jj'} is the Kronecker delta, which is 1 if j = j' and 0 if j ≠ j'. If we assume that the parameters are ordered lexicographically in θ, then these entries arrange exactly into F_t = ĥ_{t-1} ⊗ D_t. ∎

A.3 Proof of Lemma 2

As mentioned in the main text, Lemma 2 is essentially borrowed from [15]. We state the lemma slightly more generally than in the main text, that is, for arbitrarily many summands.
Lemma 3.
Let C = ∑_{i=1}^k A_i ⊗ B_i, where the A_i's are matrices of the same size and the B_i's are matrices of the same size. Let the c_i be chosen independently and uniformly at random from {−1, +1} and let p_1, …, p_k be positive reals. Define A' = ∑_i c_i p_i A_i and B' = ∑_i c_i (1/p_i) B_i. Then, A' ⊗ B' is an unbiased approximation of C, that is, E[A' ⊗ B'] = C. The free parameters p_i can be chosen to minimize the variance of A' ⊗ B'. For the optimal choice p_i = sqrt(||B_i|| / ||A_i||) it holds

Var[A' ⊗ B'] = ∑_{i ≠ j} ( ||A_i|| ||B_i|| ||A_j|| ||B_j|| + ⟨A_i, A_j⟩ ⟨B_i, B_j⟩ ) .      (11)
Proof.
The independence of the c_i implies that E[c_i c_j] = 0 if i ≠ j and E[c_i c_j] = 1 if i = j. Therefore, the first claim follows easily by linearity of expectation: E[A' ⊗ B'] = ∑_{i,j} E[c_i c_j] (p_i / p_j) A_i ⊗ B_j = ∑_i A_i ⊗ B_i = C. For the proof of the second claim we use Proposition 1 from [15]. Let v_i = vec(A_i), where vec(M) denotes the vector obtained by concatenating the columns of a matrix M, let w_i = vec(B_i), and let v' = ∑_i c_i p_i v_i and w' = ∑_i c_i (1/p_i) w_i. The matrices A' ⊗ B' and v' (w')^T have the same entries (up to rearrangement) for the same choice of c_i's. It follows that ||A' ⊗ B' − C|| = ||v' (w')^T − ∑_i v_i w_i^T||, and therefore Var[A' ⊗ B'] = Var[v' (w')^T]. By Proposition 1 from [15], choosing p_i = sqrt(||w_i|| / ||v_i||) minimizes this variance, resulting in Equation 11. This implies the lemma because ||vec(M)|| = ||M|| and ⟨vec(M), vec(N)⟩ = ⟨M, N⟩ for any matrices M and N of the same size. ∎

A.4 Proof of Theorem 1

The spectral norm of a matrix A is defined as ||A||_2 = max_{x ≠ 0} ||A x||_2 / ||x||_2. Note that ||A B|| ≤ ||A||_2 ||B|| holds for any matrices A and B for which the product is defined.
Theorem 2.
Let ε > 0 be arbitrarily small. Assume for all t that the spectral norm of H_t is at most 1 − ε and that ||F_t|| ≤ C. Then, for the class of RNNs defined in Lemma 1, the estimate G'_t obtained by the KF-RTRL algorithm satisfies at any time t that Var[G'_t] ≤ c_ε C^2, where c_ε is a constant depending only on ε (one may take c_ε = 16/ε^3).
Before proving this theorem, let us show how it implies Theorem 1. Note that the hidden state and the inputs take values between −1 and 1. Therefore, ||ĥ_{t-1}|| = O(sqrt(n)). By Lemma 1, the non-zero entries of F_t are of the form (ĥ_{t-1})_i (D^k_t)_{jj}. By the assumptions on g, the entries of D_t are bounded, so ||D_t|| = O(sqrt(n)) and ||F_t|| = ||ĥ_{t-1}|| ||D_t|| = O(n) follows. Theorem 2 then implies that Var[G'_t] = O(n^2). Since the number of entries of G'_t is of order n^3, the mean of the variances of the entries of G'_t is of order O(1/n).
Proof of Theorem 2.
The proof idea goes as follows. Write G'_t as the sum of the true (deterministic) gradient G_t and the random noise induced by the approximations up to time t; this noise has mean zero. Then, write Var[G'_t] as the sum of the variance induced by the t-th approximation step and the variance carried over from previous steps. The bound on the spectral norm of H_t ensures that the latter summand contracts by a factor (1 − ε)^2 per step. Therefore, the total variance stays of the same order of magnitude as the variance induced in a single time-step, and this magnitude can be bounded as well.

Now let us prove the statement formally. Define

Ĝ_t = H_t (u_{t-1} ⊗ A_{t-1}) + ĥ_{t-1} ⊗ D_t = u_{t-1} ⊗ (H_t A_{t-1}) + ĥ_{t-1} ⊗ D_t ,

that is, the exact value of the recursion before the approximation step at time t. By Equations 7 and 8,

G'_t = u_t ⊗ A_t = Ĝ_t + c_1 c_2 N_t ,  where  N_t = (p_1 / p_2) u_{t-1} ⊗ D_t + (p_2 / p_1) ĥ_{t-1} ⊗ (H_t A_{t-1}) .

Observe that E[G'_t | u_{t-1}, A_{t-1}] = Ĝ_t, which together with Equation 3 implies that E[G'_t] = H_t E[G'_{t-1}] + F_t = G_t. It follows that Var[G'_t] = E[||G'_t − G_t||^2].

Claim 1.
For two random matrices X and Y of the same size and a sign c chosen uniformly at random from {−1, +1} independently of X and Y, it holds that Var[X + c Y] = Var[X] + E[||Y||^2].

We postpone the proof and first show the theorem. Since c_1 c_2 is a uniformly random sign independent of u_{t-1}, A_{t-1} and N_t, Claim 1 implies that

Var[G'_t] = Var[Ĝ_t] + E[||N_t||^2] .

Let us first bound the first term. Since G'_{t-1} is unbiased, it holds that E[Ĝ_t] = G_t and Ĝ_t − G_t = H_t (G'_{t-1} − G_{t-1}), and therefore Var[Ĝ_t] = E[||H_t (G'_{t-1} − G_{t-1})||^2] ≤ (1 − ε)^2 Var[G'_{t-1}]. A bound for the second term can be obtained by the triangle inequality and the choice of p_1 and p_2:

||N_t|| ≤ (p_1 / p_2) ||u_{t-1}|| ||D_t|| + (p_2 / p_1) ||ĥ_{t-1}|| ||H_t A_{t-1}|| = 2 sqrt( ||u_{t-1}|| ||H_t A_{t-1}|| ||ĥ_{t-1}|| ||D_t|| ) ≤ 2 sqrt( (1 − ε) ||u_{t-1}|| ||A_{t-1}|| ||F_t|| ) ≤ 4 C / ε ,

where the last inequality follows from ||F_t|| ≤ C and the following claim.

Claim 2.
||u_t|| ||A_t|| ≤ 4 C / ε^2 holds for all time-steps t.

Let us postpone its proof and show by induction that Var[G'_t] ≤ 16 C^2 / ε^3. Assume this is true for t − 1. Then

Var[G'_t] ≤ (1 − ε)^2 Var[G'_{t-1}] + 16 C^2 / ε^2 ≤ (1 − ε) · 16 C^2 / ε^3 + ε · 16 C^2 / ε^3 = 16 C^2 / ε^3 ,

which implies the theorem. Let us now prove Claim 1. Note that Var[Z] = E[||Z||^2] − ||E[Z]||^2 holds for any random matrix Z, and therefore

Var[X + c Y] = E[||X||^2] + 2 E[⟨X, c Y⟩] + E[||Y||^2] − ||E[X]||^2 = Var[X] + E[||Y||^2] ,

where we used that E[c Y] = 0 and E[⟨X, c Y⟩] = 0, since c is a centered sign independent of X and Y. It remains to prove Claim 2. ∎

We show Claim 2 by induction over t. For t = 0 the claim is true, since G'_0 is the all-zero matrix. For the induction step, assume that ||u_{t-1}|| ||A_{t-1}|| ≤ 4 C / ε^2. Using the update rules for u_t and A_t (see Equations 7 and 8) and the triangle inequality, we obtain ||u_t|| ≤ p_1 ||u_{t-1}|| + p_2 ||ĥ_{t-1}|| and ||A_t|| ≤ (1/p_1) ||H_t A_{t-1}|| + (1/p_2) ||D_t||. It follows that

||u_t|| ||A_t|| ≤ ( sqrt(||u_{t-1}|| ||H_t A_{t-1}||) + sqrt(||ĥ_{t-1}|| ||D_t||) )^2 ≤ ( sqrt((1 − ε) ||u_{t-1}|| ||A_{t-1}||) + sqrt(||F_t||) )^2 ≤ ( sqrt((1 − ε) · 4 C / ε^2) + sqrt(C) )^2 ≤ 4 C / ε^2 ,

where the last inequality uses sqrt(1 − ε) ≤ 1 − ε/2.

A.5 Computation of Variance of UORO Approach

In the first approximation step of the UORO algorithm, F_t is approximated by the rank-one matrix ν ν^T F_t, where ν is a vector of signs chosen uniformly at random from {−1, +1}^n; this estimate is unbiased since E[ν ν^T] is the identity. For the RNN architectures considered in this paper, F_t is a concatenation of diagonal matrices, cf. Lemma 1. Intuitively, all the off-diagonal elements of the UORO approximation are far off the true value 0. Therefore, the variance per entry introduced in this step is of the order of the squared diagonal entries of F_t. More precisely, it holds that

Var[ν ν^T F_t] = ∑_{i,j} Var[ (ν ν^T F_t)_{ij} ] = ∑_{i,j} ∑_{k ≠ i} (F_t)_{kj}^2 = (n − 1) ||F_t||^2 ,

where we used that the products ν_i ν_k for k ≠ i are pairwise uncorrelated signs. Since ||F_t||^2 is typically of order n^2 (each column of F_t has a single non-zero entry of constant order), already this first step introduces a total variance of order n^3, that is, a mean variance per entry of order 1.