What if Neural Networks had SVDs?

Abstract

Various Neural Networks employ time-consuming matrix operations like matrix inversion. Many such matrix operations are faster to compute given the Singular Value Decomposition (SVD). Techniques from Zhang et al. (2018); Mhammedi et al. (2017) allow using the SVD in Neural Networks without computing it. In theory, the techniques can speed up matrix operations; however, in practice, they are not fast enough. We present an algorithm that is fast enough to speed up several matrix operations. The algorithm increases the degree of parallelism of an underlying matrix multiplication $UX$, where $U$ is an orthogonal matrix represented by a product of Householder matrices.

1 Introduction

Figure 1: Time consumption of matrix inversion in Neural Networks. The plot compares FastH against the sequential algorithm from Zhang et al. (2018) (see Section 4).

What could be done if the Singular Value Decomposition (SVD) of the weights in a Neural Network was given? Time-consuming matrix operations, such as matrix inversion Hoogeboom et al. (2019), could be computed faster, reducing training time. However, on $d\times d$ weight matrices it takes $O(d^3)$ time to compute the SVD, which is not faster than computing the matrix inverse in $O(d^3)$ time. In Neural Networks, one can circumvent the SVD computation by using the SVD reparameterization from Zhang et al. (2018), which, in theory, reduces the time complexity of matrix inversion from $O(d^3)$ to $O(d^2)$. However, in practice, the SVD reparameterization attains no speed-up for matrix inversion on GPUs.

The difference between theory and practice occurs because the previous technique increases sequential work, which is not taken into account by the time complexity analysis. On a $d\times d$ weight matrix, the previous technique entails the computation of $O(d)$ sequential inner products, which is ill-fit for parallel hardware like a GPU because the GPU cannot utilize all its cores. For example, if a GPU has 4000 cores and computes sequential inner products on 100-dimensional vectors, it can only utilize 100 cores simultaneously, leaving the remaining 3900 cores to run idle.

We introduce a novel algorithm, FastH, which increases core utilization, leaving fewer cores to run idle. This is accomplished by increasing the degree of parallelization of an underlying matrix multiplication $UX$, where $U$ is an orthogonal matrix represented by a product of $d$ Householder matrices. FastH retains the same desirable time complexity as the sequential algorithm from Zhang et al. (2018) while reducing the number of sequential operations. On a mini-batch of size $m > 1$, FastH performs $O(d/m + m)$ sequential matrix-matrix operations instead of $O(d)$ sequential vector-vector operations.

In practice, FastH is faster than all algorithms from Zhang et al. (2018); for example, it is several times faster than their sequential algorithm, see Figure 1. Code: www.github.com/AlexanderMath/fasth.

2 Background

2.1 Fast Matrix Operations Using SVD

The SVD allows faster computation of many matrix operations commonly used by Neural Networks. A few examples include matrix determinant Dinh et al. (2015), matrix inverse Kingma and Dhariwal (2018), Spectral Normalization Miyato et al. (2018), the matrix exponential Lezcano-Casado and Martínez-Rubio (2019), the Cayley transform Golinski et al. (2019), weight decay, condition number and compression by low-rank approximation Xue et al. (2013). Proofs can be found in most linear algebra textbooks, see, e.g., Strang (2006).
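For concreteness, here is a minimal sketch (assuming the SVD factors are already available, as in the reparameterization discussed next) of two of these operations; it is illustrative only, and the explicit comparisons against torch.slogdet and torch.inverse exist only to check the identities.

```python
# A minimal sketch: given the SVD factors of W = U @ diag(s) @ V.T, several
# matrix operations become cheap.
import torch

d, m = 64, 32
# Stand-in orthogonal factors and singular values (in the SVD reparameterization
# these would be maintained directly by the network).
U, _ = torch.linalg.qr(torch.randn(d, d, dtype=torch.float64))
V, _ = torch.linalg.qr(torch.randn(d, d, dtype=torch.float64))
s = torch.rand(d, dtype=torch.float64) + 0.5          # singular values, kept away from zero
X = torch.randn(d, m, dtype=torch.float64)            # a mini-batch

# log|det W| = sum_i log s_i            -- O(d) instead of O(d^3)
logabsdet = s.log().sum()

# W^{-1} X = V @ diag(1/s) @ U.T @ X    -- O(d^2 m) without ever forming W^{-1}
inv_applied = V @ ((U.T @ X) / s[:, None])

# Sanity check against the standard O(d^3) routines.
W = U @ torch.diag(s) @ V.T
print(torch.allclose(logabsdet, torch.slogdet(W).logabsdet, atol=1e-10))
print(torch.allclose(inv_applied, torch.inverse(W) @ X, atol=1e-10))
```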

2.2 The SVD Reparameterization

This subsection describes how Zhang et al. (2018) allow for using the SVD of the weight matrices in Neural Networks without computing it, and in particular, how this approach is limited by the computation of sequential inner products. Let $W = U\Sigma V^T$ be the SVD of a weight matrix $W \in \mathbb{R}^{d\times d}$, where $\Sigma$ is a diagonal matrix and $U, V$ are orthogonal matrices, i.e., $U^TU = I$ and $V^TV = I$. The goal is to perform gradient descent updates to $W$ while preserving the SVD. Consider updating $U$, $\Sigma$ and $V$ a small step of size $\eta$ in the direction of the gradients $\frac{\partial L}{\partial U}$, $\frac{\partial L}{\partial \Sigma}$ and $\frac{\partial L}{\partial V}$.

While $\Sigma - \eta\frac{\partial L}{\partial \Sigma}$ remains diagonal, both $U - \eta\frac{\partial L}{\partial U}$ and $V - \eta\frac{\partial L}{\partial V}$ are in general not orthogonal, which is needed to preserve the SVD. To this end, Zhang et al. (2018) suggested using a technique from Mhammedi et al. (2017) which decomposes an orthogonal matrix $U \in \mathbb{R}^{d\times d}$ as a product of $d$ Householder matrices:

$$U = H_1 H_2 \cdots H_d, \qquad H_i = I - 2\frac{v_i v_i^T}{\|v_i\|_2^2}. \qquad (1)$$

Householder matrices satisfy several useful properties. In particular, each $H_i$ remains orthogonal under gradient descent updates of its Householder vector $v_i$ Mhammedi et al. (2017). Furthermore, all products of Householder matrices are orthogonal, and any $d\times d$ orthogonal matrix can be decomposed as a product of $d$ Householder matrices Uhlig (2001). Householder matrices thus allow us to perform gradient descent over orthogonal matrices, which allows us to preserve the SVD of $W$ during gradient descent updates.
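A minimal sketch of this property (illustrative, not the paper's implementation): the product of Householder matrices built from arbitrary nonzero vectors is orthogonal, both before and after a gradient step on the vectors.

```python
# Any set of (nonzero) vectors v_1, ..., v_d yields an orthogonal product
# H_1 H_2 ... H_d (Eq. 1), so plain gradient descent on the v_i's stays on the
# orthogonal matrices.
import torch

def householder_product(vs: torch.Tensor) -> torch.Tensor:
    """Multiply out U = H_1 H_2 ... H_d for rows vs[i] = v_i (illustrative, O(d^3))."""
    d = vs.shape[1]
    U = torch.eye(d, dtype=vs.dtype)
    for v in vs:                                # H_i = I - 2 v v^T / ||v||^2
        U = U - 2.0 * torch.outer(U @ v, v) / (v @ v)
    return U

vs = torch.randn(8, 8, dtype=torch.float64, requires_grad=True)
U = householder_product(vs)
print(torch.allclose(U.T @ U, torch.eye(8, dtype=torch.float64), atol=1e-10))   # True

# The product stays orthogonal after a gradient step on the Householder vectors.
loss = (U.sum() - 1.0) ** 2
loss.backward()
with torch.no_grad():
    vs_new = vs - 0.1 * vs.grad
U_new = householder_product(vs_new)
print(torch.allclose(U_new.T @ U_new, torch.eye(8, dtype=torch.float64), atol=1e-10))  # True
```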

Multiplication.

One potential issue remains. The Householder decomposition might increase the time it takes to multiply $UX$ for a mini-batch $X \in \mathbb{R}^{d\times m}$ during the forward pass. Computing $UX = H_1(H_2(\cdots(H_d X)))$ takes $d$ Householder multiplications. If done sequentially, as indicated by the parentheses, each Householder multiplication can be computed in $O(dm)$ time Zhang et al. (2018). All $d$ multiplications can thus be done in $O(d^2m)$ time. Therefore, the Householder decomposition does not increase the time complexity of computing $UX$. A sketch of this sequential scheme follows below.
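The sketch below is illustrative only (it is not the implementation from Zhang et al. (2018)); each loop iteration is a cheap rank-one update, but every update depends on the previous one.

```python
# Sequential scheme: UX = H_1(H_2(...(H_d X))) is applied with d dependent
# rank-one updates, each costing O(dm) time.
import torch

def sequential_householder_matmul(vs: torch.Tensor, X: torch.Tensor) -> torch.Tensor:
    """Compute (H_1 H_2 ... H_d) X, where vs[i] is the Householder vector of H_i."""
    for v in reversed(vs):                      # H_d is applied first, H_1 last
        X = X - 2.0 * torch.outer(v, v @ X) / (v @ v)   # H_i X, an O(dm) update
    return X

d, m = 256, 32
vs, X = torch.randn(d, d), torch.randn(d, m)
out = sequential_householder_matmul(vs, X)      # d sequential steps, ill-suited for GPUs
```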

Unfortunately, the time complexity comes at the cost of multiplying each Householder matrix sequentially, and each Householder multiplication entails computing an inner product, see Equation 1. The multiplication $UX$ then requires the computation of $O(d)$ sequential inner products. Such sequential computation is slow on parallel hardware like GPUs, much slower than normal matrix multiplication. To exploit GPUs, Zhang et al. (2018) suggested using a parallel algorithm that takes $O(d^3)$ time, but this is no faster than computing the SVD.

We are thus left with two options: (1) an $O(d^2m)$ sequential algorithm and (2) an $O(d^3)$ parallel algorithm. The first option is undesirable since it entails the sequential computation of $O(d)$ inner products. The second option is also undesirable since it takes $O(d^3)$ time, which is the same as computing the SVD, i.e., we might as well just compute the SVD. In practice, both algorithms usually achieve no speed-up for matrix operations like matrix inversion, as we show in Section 4.2.

Our main contribution is a novel parallel algorithm, FastH, which resolves the issue with sequential inner products without increasing the time complexity. FastH takes $O(d^2m)$ time but performs $O(d/m + m)$ sequential matrix-matrix operations instead of $O(d)$ sequential vector-vector operations (inner products). In practice, FastH is several times faster than both the parallel and the sequential algorithm, see Section 4.1.

Mathematical Setting.

We compare the different methods by counting the number of sequential matrix-matrix and vector-vector operations. We count only once when other sequential operations can be done in parallel. For example, if one processor handles $v_1, \dots, v_t$ sequentially while, in parallel, another processor handles $v_{t+1}, \dots, v_{2t}$ sequentially, we count $t$ sequential vector-vector operations.

Orthogonal Gradient Descent.

The SVD reparameterization performs gradient descent over orthogonal matrices. This is possible with Householder matrices; other techniques, such as those of Casado (2019) and Li et al. (2020), rely on the matrix exponential and the Cayley map, respectively. For $d\times d$ matrices both techniques spend $O(d^3)$ time, which is no faster than computing the SVD.
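For concreteness, here is a minimal sketch (not code from the cited works) of the two alternative parameterizations: both map a skew-symmetric matrix to an orthogonal matrix, but both rely on dense $O(d^3)$ routines.

```python
# Matrix exponential and Cayley map of a skew-symmetric matrix A are orthogonal.
import torch

d = 64
M = torch.randn(d, d, dtype=torch.float64)
A = M - M.T                                     # skew-symmetric: A^T = -A
I = torch.eye(d, dtype=torch.float64)

Q_exp = torch.matrix_exp(A)                     # matrix exponential, O(d^3)
Q_cay = torch.linalg.solve(I + A, I - A)        # Cayley map (I + A)^{-1}(I - A), O(d^3)

print(torch.allclose(Q_exp.T @ Q_exp, I, atol=1e-10),
      torch.allclose(Q_cay.T @ Q_cay, I, atol=1e-10))
```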

3 Fast Householder Multiplication (FastH)

3.1 Forward Pass

Our goal is to create an algorithm with few sequential operations that solves the following problem: Given an input $X \in \mathbb{R}^{d\times m}$ with $m < d$ and Householder matrices $H_1, \dots, H_d$, compute the output $A = H_1 H_2 \cdots H_d X$. For simplicity, we assume $m$ divides $d$.

Since each $H_i$ is a $d\times d$ matrix, it would take $O(d^3)$ time just to read the input $H_1, \dots, H_d$. Therefore, we represent each Householder matrix $H_i$ by its Householder vector $v_i$ such that $H_i = I - 2v_iv_i^T/\|v_i\|_2^2$. A simplified version of the forward pass of FastH proceeds as follows: divide the Householder product into $d/m$ smaller products, so each $P_i$ is a product of $m$ Householder matrices:

$$P_i = H_{(i-1)m+1} \cdots H_{im}, \qquad i = 1, \dots, d/m. \qquad (2)$$

All $d/m$ products can be computed in parallel. The output can then be computed as $A = P_1(P_2(\cdots(P_{d/m}X)))$ instead of $A = H_1(H_2(\cdots(H_dX)))$, which reduces the number of sequential matrix multiplications from $d$ to $d/m$.

This algorithm computes the correct $A$. However, the time complexity increases due to two issues. First, multiplying each product $P_i$, stored as a dense $d\times d$ matrix, with a $d\times m$ matrix takes $O(d^2m)$ time, a total of $O(d^3)$ time for all $d/m$ products. Second, we need to compute all $d/m$ products in $O(d^2m)$ total time, so each product must be computed in $O(dm^2)$ time. If we only use the Householder structure, it takes $O(d^2m)$ time to compute each $P_i$, which is not fast enough.

Both issues can be resolved, yielding an $O(d^2m)$ algorithm. The key ingredient is a linear algebra result Bischof and Van Loan (1987) that dates back to 1987. The result is restated in Lemma 1.

Lemma 1.

For any $m$ Householder matrices $H_1, \dots, H_m$ there exist $W, Y \in \mathbb{R}^{d\times m}$ st. $H_1 H_2 \cdots H_m = I - 2WY^T$. Computing $W$ and $Y$ takes $O(dm^2)$ time and $m$ sequential Householder multiplications.
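The sketch below illustrates Lemma 1 under the convention $H_i = I - 2v_iv_i^T/\|v_i\|_2^2$; the incremental construction is one standard way to build the WY form and is meant as an illustration, not as the implementation from Bischof and Van Loan (1987) or the paper's CUDA kernel.

```python
# WY representation: H_1 ... H_m = I - 2 W Y^T, where column j of Y is the
# normalized v_j and column j of W is (H_1 ... H_{j-1}) applied to that column.
# Building W, Y takes m sequential Householder multiplications and O(dm^2) time.
import torch

def wy_representation(vs: torch.Tensor):
    """vs: (m, d) Householder vectors. Returns W, Y with H_1...H_m = I - 2 W Y^T."""
    m, d = vs.shape
    W = torch.zeros(d, m, dtype=vs.dtype)
    Y = torch.zeros(d, m, dtype=vs.dtype)
    for j in range(m):
        u = vs[j] / vs[j].norm()
        Y[:, j] = u
        W[:, j] = u - 2.0 * W[:, :j] @ (Y[:, :j].T @ u)   # (H_1...H_{j-1}) u
    return W, Y

# Check against the explicit product on a small instance.
m, d = 4, 16
vs = torch.randn(m, d, dtype=torch.float64)
W, Y = wy_representation(vs)
P = torch.eye(d, dtype=torch.float64)
for v in vs:
    P = P @ (torch.eye(d, dtype=torch.float64) - 2.0 * torch.outer(v, v) / (v @ v))
print(torch.allclose(P, torch.eye(d, dtype=torch.float64) - 2.0 * W @ Y.T, atol=1e-10))
```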

For completeness, we provide pseudo-code in Algorithm 1. Theorem 1 states properties of Algorithm 1, and its proof clarifies how Lemma 1 solves both issues outlined above.

Theorem 1.

Algorithm 1 computes $A_1 = H_1 \cdots H_d X$ in $O(d^2m)$ time with $O(d/m + m)$ sequential matrix multiplications.

Proof.

Correctness. Each iteration of Step 2 in Algorithm 1 utilizes Lemma 1 to compute $A_i = P_i A_{i+1}$. Therefore, at termination, $A_1 = P_1 P_2 \cdots P_{d/m} X$. In Step 1, we used Lemma 1 to compute the $P_i$'s such that $P_1 P_2 \cdots P_{d/m} = H_1 H_2 \cdots H_d$, so $A_1 = H_1 \cdots H_d X$ as wanted.

Time Complexity. Consider the for loop in Step 1. By Lemma 1, each iteration takes $O(dm^2)$ time. Therefore, the total time of the $d/m$ iterations is $O(dm^2 \cdot d/m) = O(d^2m)$. Consider iteration $i$ of the loop in Step 2. The time of the iteration is asymptotically dominated by the two matrix multiplications $Y_i^TA_{i+1}$ and $W_i(Y_i^TA_{i+1})$. Since $W_i$ and $Y_i$ are $d\times m$ matrices, it takes $O(dm^2)$ time to compute both matrix multiplications. There are $d/m$ iterations, so the total time becomes $O(d^2m)$.

Number of Sequential Operations. Each iteration in Step 2 performs two sequential matrix multiplications. There are $d/m$ sequential iterations, which gives a total of $O(d/m)$ sequential matrix multiplications. Each iteration in Step 1 performs $m$ sequential Householder multiplications to construct $W_i$ and $Y_i$, see Lemma 1. Since the iterations of Step 1 run in parallel, the algorithm performs no more than $O(d/m + m)$ sequential matrix multiplications. ∎

Remark.

Section 3.2 extends the techniques from this section to handle gradient computations. For simplicity, this section had Algorithm 1 compute only the output $A_1$; however, in Section 3.2 it will be convenient to assume all intermediate results $A_1, \dots, A_{d/m+1}$ are precomputed. Each $A_i$ can be saved during Step 2 of Algorithm 1 without increasing the asymptotic memory consumption.
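Putting the pieces together, here is a minimal PyTorch sketch of the simplified forward pass of Algorithm 1. It is illustrative only: the real implementation builds the WY representations in parallel in CUDA, whereas the "parallel" Step 1 below is just a Python loop.

```python
# FastH forward sketch: group the d Householder matrices into d/m products
# P_i = I - 2 W_i Y_i^T (Lemma 1), so only d/m matrix products touch X sequentially.
import torch

def fasth_forward(vs: torch.Tensor, X: torch.Tensor) -> torch.Tensor:
    """vs: (d, d) Householder vectors, X: (d, m). Returns (H_1 ... H_d) X."""
    d, m = X.shape
    assert vs.shape[0] % m == 0, "for simplicity, assume m divides d"
    # Step 1: WY representation of each block of m Householder matrices.
    blocks = []
    for block in vs.reshape(-1, m, vs.shape[1]):          # parallel across blocks in FastH
        W = torch.zeros(d, m, dtype=vs.dtype)
        Y = torch.zeros(d, m, dtype=vs.dtype)
        for j in range(m):
            u = block[j] / block[j].norm()
            Y[:, j] = u
            W[:, j] = u - 2.0 * W[:, :j] @ (Y[:, :j].T @ u)
        blocks.append((W, Y))
    # Step 2: apply P_{d/m}, ..., P_1 to X with two matrix products per block.
    for W, Y in reversed(blocks):
        X = X - 2.0 * W @ (Y.T @ X)           # O(dm^2) per block, d/m sequential steps
    return X

d, m = 128, 16
vs = torch.randn(d, d, dtype=torch.float64)
X = torch.randn(d, m, dtype=torch.float64)
out = fasth_forward(vs, X)

# Agrees with the plain sequential algorithm.
ref = X.clone()
for v in reversed(vs):
    ref = ref - 2.0 * torch.outer(v, v @ ref) / (v @ v)
print(torch.allclose(out, ref, atol=1e-10))
```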

(a) Step 1: Sequential part of Algorithm 2.
(b) Step 2: The $i$'th subproblem in Algorithm 2.
Figure 2: Computational graph of Step 1 and the $i$'th subproblem in Step 2 from Algorithm 2.

3.2 Backwards Propagation

This subsection extends the techniques from Section 3.1 to handle gradient computations. Our goal is to create an algorithm with few sequential operations that solves the following problem: Given the intermediate results $A_1, \dots, A_{d/m+1}$ from the forward pass and $\frac{\partial L}{\partial A_1}$ for some loss function $L$, compute $\frac{\partial L}{\partial X}$ and $\frac{\partial L}{\partial v_i}$ for all $i$, where $v_i$ is the Householder vector st. $H_i = I - 2v_iv_i^T/\|v_i\|_2^2$.

Since each $P_i$ is a $d\times d$ matrix, it would take $O(d^3/m)$ time just to read the input $P_1, \dots, P_{d/m}$. Therefore, we represent each $P_i$ by its WY decomposition $P_i = I - 2W_iY_i^T$.

On a high level, the backward pass of FastH has two steps.

Step 1.

Sequentially compute the gradients $\frac{\partial L}{\partial A_2}, \dots, \frac{\partial L}{\partial A_{d/m+1}}$ by

$$\frac{\partial L}{\partial A_{i+1}} = P_i^T \frac{\partial L}{\partial A_i} = \frac{\partial L}{\partial A_i} - 2Y_i\left(W_i^T \frac{\partial L}{\partial A_i}\right). \qquad (3)$$

This also gives the gradient wrt. $X$ since $X = A_{d/m+1}$.

Step 2.

Use the gradients $\frac{\partial L}{\partial A_i}$ from Step 1 to compute the gradient $\frac{\partial L}{\partial v_j}$ for all $j$. This problem can be split into $d/m$ subproblems, which can be solved in parallel, one subproblem for each product $P_i$.

Details.

For completeness, we state pseudo-code in Algorithm 2, which we now explain with the help of Figure 2. Figure 2(a) depicts a computational graph of Step 1 in Algorithm 2. In the figure, consider $P_1^T$ and $\frac{\partial L}{\partial A_1}$, which both have directed edges to a multiplication node. The output of this multiplication is $\frac{\partial L}{\partial A_2}$ by Equation 3. This can be repeated to obtain $\frac{\partial L}{\partial A_3}, \dots, \frac{\partial L}{\partial A_{d/m+1}}$.

Step 2 computes the gradient of all Householder vectors $v_j$. This computation is split into $d/m$ distinct subproblems that can be solved in parallel. Each subproblem concerns $A_i$, $\frac{\partial L}{\partial A_i}$ and the product $P_i$, see lines 8–10 in Algorithm 2.

To ease notation, we index the Householder matrices of $P_i$ by $P_i = \hat H_1 \hat H_2 \cdots \hat H_m$. Furthermore, we let $\hat A_1 = A_i$ and $\hat A_{j+1} = \hat H_j^T \hat A_j$. The notation implies that $\hat A_{m+1} = A_{i+1}$. The goal of each subproblem is to compute gradients wrt. the Householder vectors $\hat v_j$ of $P_i$. To compute the gradient of $\hat v_j$, we need $\frac{\partial L}{\partial \hat A_j}$ and $\hat A_{j+1}$, which can be computed by:

$$\hat A_{j+1} = \hat H_j^T \hat A_j, \qquad \frac{\partial L}{\partial \hat A_{j+1}} = \hat H_j^T \frac{\partial L}{\partial \hat A_j}. \qquad (4)$$

Figure 2(b) depicts how $\hat A_{j+1}$ and $\frac{\partial L}{\partial \hat A_{j+1}}$ can be computed given $\hat A_j$ and $\frac{\partial L}{\partial \hat A_j}$. Given $\hat A_{j+1}$ and $\frac{\partial L}{\partial \hat A_j}$, we can compute $\frac{\partial L}{\partial \hat v_j}$ as done in Zhang et al. (2018); Mhammedi et al. (2017). For completeness, we restate the needed equation in our notation, see Equation 5.

Let $a^{(k)}$ be the $k$'th column of $\hat A_{j+1}$ and let $g^{(k)}$ be the $k$'th column of $\frac{\partial L}{\partial \hat A_j}$. The sum of the gradient over a mini-batch of size $m$ is then:

$$\frac{\partial L}{\partial \hat v_j} = \sum_{k=1}^{m} -\frac{2}{\|\hat v_j\|_2^2}\left[\left(\hat v_j^T a^{(k)}\right) g^{(k)} + \left(\hat v_j^T g^{(k)}\right) a^{(k)}\right] + \frac{4\left(\hat v_j^T a^{(k)}\right)\left(\hat v_j^T g^{(k)}\right)}{\|\hat v_j\|_2^4}\,\hat v_j. \qquad (5)$$
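As a sanity check, the following sketch compares the gradient formula above, as reconstructed here in our notation, against PyTorch autograd for a single Householder matrix; the variable names are illustrative.

```python
# Numerical check of the Householder-vector gradient (Eq. 5) against autograd.
import torch

d, m = 16, 8
torch.manual_seed(0)
v = torch.randn(d, dtype=torch.float64, requires_grad=True)
A = torch.randn(d, m, dtype=torch.float64)        # plays the role of \hat{A}_{j+1}
G = torch.randn(d, m, dtype=torch.float64)        # upstream gradient dL/d\hat{A}_j

# Forward: \hat{A}_j = H \hat{A}_{j+1} with H = I - 2 v v^T / ||v||^2, and a
# linear loss whose gradient wrt the output is exactly G.
out = A - 2.0 * torch.outer(v, v @ A) / (v @ v)
loss = (G * out).sum()
loss.backward()

# Analytic gradient wrt v, summed over the mini-batch (Eq. 5).
s = (v @ v).detach()
va, vg = v.detach() @ A, v.detach() @ G           # v^T a^(k), v^T g^(k) for all k
grad_v = (-2.0 / s) * (G @ va + A @ vg) + (4.0 / s**2) * (va * vg).sum() * v.detach()

print(torch.allclose(grad_v, v.grad, atol=1e-10))  # True
```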

Theorem 2 states properties of Algorithm 2.

Theorem 2.

Algorithm 2 computes $\frac{\partial L}{\partial X}$ and $\frac{\partial L}{\partial v_i}$ for all $i$ in $O(d^2m)$ time with $O(d/m + m)$ sequential matrix multiplications.

Proof.

See the Supplementary Material 8.1. ∎

1:  Input: $X \in \mathbb{R}^{d\times m}$ and Householder vectors $v_1, \dots, v_d$.
2:  Output: $A_1 = H_1 \cdots H_d X$.
3:  // Step 1
4:  for $i = 1$ to $d/m$ do in parallel
5:      Compute $W_i, Y_i$ st. $P_i = I - 2W_iY_i^T$ by using Lemma 1.
6:  end for
7:  // Step 2
8:  $A_{d/m+1} := X$
9:  for $i = d/m$ to $1$ do sequentially
10:      $A_i := A_{i+1} - 2W_i(Y_i^T A_{i+1})$.
11:  end for
12:  return $A_1$.
Algorithm 1 FastH Forward
1:  Input: $A_1, \dots, A_{d/m+1}$, $\frac{\partial L}{\partial A_1}$, and $W_i, Y_i$ st. $P_i = I - 2W_iY_i^T$ for all $i$.
2:  Output: $\frac{\partial L}{\partial X}$ and $\frac{\partial L}{\partial v_i}$ for all $i$, where $H_i = I - 2v_iv_i^T/\|v_i\|_2^2$.
3:  // Step 1
4:  for $i = 1$ to $d/m$ do sequentially
5:      $\frac{\partial L}{\partial A_{i+1}} := \frac{\partial L}{\partial A_i} - 2Y_i(W_i^T \frac{\partial L}{\partial A_i})$, see eq. 3.
6:  end for
7:  // Step 2
8:  for $i = 1$ to $d/m$ do in parallel
9:      Let $\hat A_1 := A_i$ and $\frac{\partial L}{\partial \hat A_1} := \frac{\partial L}{\partial A_i}$.
10:      To ease notation, let $P_i = \hat H_1 \hat H_2 \cdots \hat H_m$.
11:      for $j = 1$ to $m$ do
12:          Compute $\hat A_{j+1}$ and $\frac{\partial L}{\partial \hat A_{j+1}}$, see eq. 4.
13:          Compute $\frac{\partial L}{\partial \hat v_j}$ using $\hat A_{j+1}$ and $\frac{\partial L}{\partial \hat A_j}$, see eq. 5.
14:      end for
15:  end for
16:  return $\frac{\partial L}{\partial X} = \frac{\partial L}{\partial A_{d/m+1}}$ and $\frac{\partial L}{\partial \hat v_j}$ from all subproblems.
Algorithm 2 FastH Backward

3.3 Extensions

Trade-off.

Both Algorithm 1 and Algorithm 2 can be extended to take a parameter $k$ that controls a trade-off between the total time complexity and the amount of parallelism. This can be achieved by changing the number of Householder matrices in each product $P_i$ from the mini-batch size $m$ to an integer $k$. The resulting algorithms take $O(d^2(m+k))$ time and perform $O(d/k + k)$ sequential matrix multiplications, which is minimized when $k = \sqrt{d}$. This extension has the practical benefit that one can try different values of $k$ and choose the one that yields superior performance on a particular hardware setup, as sketched below. Note that we never need to search for the best $k$ more than once per setup; in practice, the time it took to find the best $k$ was negligible.
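A sketch of such a one-off search is given below; `forward_with_block_size` is a hypothetical stand-in for a FastH forward pass with block size $k$ (e.g., a variant of the forward sketch in Section 3.1), and the timing loop itself is the point of the sketch.

```python
# One-off search for the trade-off parameter k: time each candidate once and
# keep the fastest on the hardware at hand.
import time
import torch

def pick_best_k(forward_with_block_size, vs, X, candidates):
    """Time each candidate block size k and return the fastest."""
    timings = {}
    for k in candidates:
        if torch.cuda.is_available():
            torch.cuda.synchronize()            # make GPU timings meaningful
        start = time.perf_counter()
        forward_with_block_size(vs, X, k)
        if torch.cuda.is_available():
            torch.cuda.synchronize()
        timings[k] = time.perf_counter() - start
    best = min(timings, key=timings.get)
    return best, timings
```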

Rectangular Matrices.

We can use the SVD reparameterization for rectangular $W \in \mathbb{R}^{n\times d}$: use orthogonal $U \in \mathbb{R}^{n\times n}$ and $V \in \mathbb{R}^{d\times d}$, a rectangular diagonal $\Sigma \in \mathbb{R}^{n\times d}$, and let $W = U\Sigma V^T$.

Convolutional Layers.

So far, we have considered the SVD reparameterization for $d\times d$ matrices, which corresponds to fully connected layers. The matrix case extends to convolutions by, e.g., $1\times 1$ convolutions Kingma and Dhariwal (2018). The SVD reparameterization can be used for such convolutions without increasing the time complexity. On an input with height $h$ and width $w$, the underlying matrix multiplication has an effective mini-batch of $m\cdot h\cdot w$ columns, and FastH performs its few sequential matrix multiplications instead of the $O(d)$ sequential inner products.
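A minimal sketch of why the matrix case carries over: a $1\times 1$ convolution applies the same $c\times c$ matrix to every spatial position, so it is a single matrix multiplication with an effective mini-batch of $m\cdot h\cdot w$ columns (the symbols below are illustrative and not tied to the paper's notation).

```python
# A 1x1 convolution with a c x c kernel equals a matrix product with a
# c x (m*h*w) matrix of flattened spatial positions.
import torch
import torch.nn.functional as F

c, m, h, w = 8, 4, 5, 7
W = torch.randn(c, c)
x = torch.randn(m, c, h, w)

conv_out = F.conv2d(x, W.view(c, c, 1, 1))                      # standard 1x1 convolution
matmul_out = (W @ x.permute(1, 0, 2, 3).reshape(c, -1)).reshape(c, m, h, w).permute(1, 0, 2, 3)
print(torch.allclose(conv_out, matmul_out, atol=1e-5))          # True
```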

Recurrent Layers.

The SVD reparameterization was originally developed for Recurrent Neural Networks (RNNs) Zhang et al. (2018). Let $t$ be the number of recurrent applications of the RNN. FastH performs $O(t(d/m + m))$ sequential matrix operations instead of the $O(td)$ sequential inner products.

4 Experiments

This section contains two experiments. Section 4.1 compares the running time of FastH against alternatives. Section 4.2 shows that FastH speeds up matrix operations. To simulate a realistic machine learning environment, we performed all experiments on a standard machine learning server using a single NVIDIA RTX 2080 Ti.

4.1 Comparing Running Time

This subsection compares the running time of FastH against four alternative algorithms. We compare the time all algorithms spend on gradient descent with a single orthogonal matrix, since such constrained gradient descent dominates the running time of the SVD reparameterization.

We first compare FastH against the parallel and the sequential algorithm from Zhang et al. (2018); all three algorithms rely on the Householder decomposition. For completeness, we also compare against approaches that do not rely on the Householder decomposition, in particular, the matrix exponential and the Cayley map Casado (2019); see the Footnotes for the implementations used. See Supplementary Material 8.2 for further details.

We measure the time of a gradient descent step with a $d\times d$ weight matrix and a mini-batch $X \in \mathbb{R}^{d\times m}$ as $d$ increases. We ran each algorithm several times and report the mean running time with error bars of one standard deviation of the running time over the repetitions.

Figure 3(a) depicts the running time on the y-axis as the size $d$ of the matrices increases on the x-axis. For all but the smallest matrices, FastH is faster than all previous approaches; for the smallest matrices, FastH is faster than all previous approaches except the parallel algorithm. Previous work, e.g., Kingma and Dhariwal (2018) and Zhang et al. (2018), employs weight matrices of comparable sizes.

(a) Running time.
(b) Relative improvement.
Figure 3: Comparison of the running times of FastH against previous algorithms. The sequential algorithm from Zhang et al. (2018) crashed for the largest values of $d$. (a) Running times of the different algorithms for $d\times d$ matrices. (b) Running times of FastH relative to previous algorithms, i.e., the mean time of a previous algorithm divided by the mean time of FastH.

Figure 3(b) depicts how much faster FastH is relative to the previous algorithms, i.e., the mean time of a previous algorithm divided by the mean time of FastH, which we refer to as relative improvement. The relative improvement of FastH increases with $d$.

For the largest matrices, FastH is many times faster than the sequential algorithm; FastH on large matrices is even faster than the sequential algorithm on much smaller matrices. Previous work like Hoogeboom et al. (2019); van den Berg et al. (2018) uses the Householder decomposition with the sequential algorithm. Since FastH computes the same thing as the sequential algorithm, it can be used to reduce computation time with no downside.

Matrix Operation   | Standard Method                   | SVD or Eigendecomposition
Determinant        | torch.slogdet(W)                  | $\log|\det W| = \sum_i \log \Sigma_{ii}$
Inverse            | torch.inverse(W)                  | $W^{-1} = V\Sigma^{-1}U^T$
Matrix Exponential | Padé Approximation Casado (2019)  | $e^W = Ue^{\Sigma}U^T$ (for $W = U\Sigma U^T$)
Cayley map         | torch.solve(I-W, I+W)             | $U(I+\Sigma)^{-1}(I-\Sigma)U^T$ (for $W = U\Sigma U^T$)
Table 1: Standard methods and the corresponding decomposition-based formulas for computing matrix operations.

4.2 Using the SVD to Compute Matrix Operations

This subsection investigates whether the SVD reparameterization achieves practical speed-ups for matrix operations like matrix inversion. We consider four different matrix operations. For each operation, we compare the SVD reparameterization against the standard method for computing the specific matrix operation, see Table 1.

Figure 4: Running time of matrix operations. Solid lines depict approaches which use the SVD reparameterization and dashed lines depict standard methods like torch.inverse.

Timing the Operations.

The matrix operations are usually used during the forward pass of a Neural Network, which changes the subsequent gradient computations. We therefore measure the sum of the time it takes to compute the matrix operation, the forward pass and the subsequent gradient computations.

For example, with matrix inversion, we measure the time it takes to compute the matrix operation $W^{-1} = V\Sigma^{-1}U^T$, the forward pass $W^{-1}X$ and the subsequent gradient computation wrt. $U$, $\Sigma$, $V$ and $X$. The measured time is then compared with torch.inverse, i.e., we compare against the total time it takes to compute torch.inverse(W), the forward pass $W^{-1}X$, and the subsequent gradient computation wrt. $W$ and $X$.

Setup.

We run the SVD reparameterization with three different algorithms: FastH, and the sequential and the parallel algorithm from Zhang et al. (2018). For each matrix operation, we consider $d\times d$ matrices $W$ and mini-batches $X \in \mathbb{R}^{d\times m}$ as $d$ increases. We repeat the experiment several times and report the mean time with error bars of one standard deviation of the running times over the repetitions. To avoid clutter, we plot only the time of FastH for the matrix operation it is slowest to compute, and the time of the sequential and parallel algorithms for the matrix operation they were fastest to compute.

Figure 4 depicts the measured running time on the y-axis with the size of the matrices increasing on the x-axis. Each matrix operation computed by a standard method is plotted as a dashed line, and the different algorithms for the SVD reparameterization are plotted as solid lines. In all cases, FastH is faster than the standard methods for the Cayley map, the matrix exponential, the inverse and the matrix determinant. The sequential algorithm is not fast enough to speed up any matrix operation.

5 Related Work

The Householder Decomposition.

The Householder decomposition of orthogonal matrices has been used in much previous work, e.g., Tomczak and Welling (2016); Mhammedi et al. (2017); Zhang et al. (2018); van den Berg et al. (2018); Hoogeboom et al. (2019). Previous work typically uses a sequential algorithm that performs $O(d)$ sequential inner products. To circumvent the resulting long computation time on GPUs, previous work often suggests limiting the number of Householder matrices, which limits the expressiveness of the orthogonal matrix, introducing a trade-off between computation time and expressiveness.

FastH takes the same asymptotic time as the sequential algorithm; however, it performs fewer sequential matrix operations, making it substantially faster in practice. Since FastH computes the same output as the previous sequential algorithm, it can be used in previous work without degrading the performance of their models. In particular, FastH can be used to either (1) increase expressiveness at no additional computational cost or (2) retain the same level of expressiveness at lower computational cost.

SVDs in Neural Networks.

The authors of Zhang et al. (2018) introduced a technique that provides access to the SVD of the weights in a Neural Network without computing the SVD. Their motivation for developing this technique was the exploding/vanishing gradient issue in RNNs. In particular, they use the SVD reparameterization to force all singular values to be within the range $[1-\epsilon, 1+\epsilon]$ for some small $\epsilon$.

We point out that although their technique, in theory, can be used to speed up matrix operations, their algorithms are too slow to speed up most matrix operations in practice. To mitigate this problem, we introduce a new algorithm that is more suitable for GPUs, which allows us to speed up several matrix operations in practice.

Different Orthogonal Parameterizations.

The SVD reparameterization by Zhang et al. (2018) uses the Householder decomposition to perform gradient descent with orthogonal matrices. Their work was followed by Golinski et al. (2019), who raise a theoretical concern about the use of the Householder decomposition. Alternative approaches based on the matrix exponential and the Cayley map have desirable provable guarantees which, currently, the Householder decomposition is not known to possess. This might make it desirable to use the matrix exponential or the Cayley map together with the SVD reparameterization from Zhang et al. (2018). However, previous work spends $O(d^3)$ time to compute or approximate the matrix exponential and the Cayley map. These approaches are therefore undesirable for our purpose, because they share the time complexity of the SVD and thus cannot yield a speed-up.

Normalizing Flows.

Normalizing Flows Dinh et al. (2015) are a type of generative model that, in some cases Kingma and Dhariwal (2018); Hoogeboom et al. (2019), entails the computation of matrix determinants and matrix inverses. Kingma and Dhariwal (2018) propose to use the PLU decomposition $W = PLU$, where $P$ is a permutation matrix and $L, U$ are lower and upper triangular. This decomposition allows the determinant to be computed in $O(d)$ time instead of $O(d^3)$. Hoogeboom et al. (2019) point out that a fixed permutation matrix limits flexibility. To fix this issue, they suggest using the QR decomposition $W = QR$, where $R$ is a rectangular matrix and $Q$ is orthogonal. They suggest making $Q$ orthogonal by using the Householder decomposition, which FastH can speed up. Alternatively, one could use the SVD decomposition instead of the QR or PLU decomposition.

6 Code

To make FastH widely accessible, we wrote a PyTorch implementation of the SVD reparameterization which uses the FastH algorithm. The implementation can be used by changing just a single line of code, i.e., change nn.Linear to LinearSVD. While implementing FastH, we found that Python did not provide an adequate level of parallelization. We therefore implemented FastH in CUDA to fully utilize the parallel capabilities of GPUs. Code: www.github.com/AlexanderMath/fasth/.
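Illustrative usage is shown below. The name LinearSVD comes from the repository above, but the import path and constructor signature shown here are assumptions; consult the repository for the authoritative interface.

```python
# Illustrative before/after; the import path and constructor signature of
# LinearSVD are assumptions -- see the repository above for the actual interface.
import torch.nn as nn
from fasth import LinearSVD  # hypothetical import path

model_standard = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))
model_svd = nn.Sequential(LinearSVD(512, 512), nn.ReLU(), nn.Linear(512, 10))
```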

7 Conclusion

We pointed out that, in theory, the techniques from Zhang et al. (2018); Mhammedi et al. (2017) allow for decreasing the time complexity of several matrix operations used in Neural Networks. However, in practice, we demonstrated that the techniques are not fast enough on GPUs for moderately sized use-cases. We proposed a novel algorithm, FastH, that remedies the issues with both algorithms from Zhang et al. (2018) and is substantially faster than the previous sequential algorithm. FastH introduces no loss of quality; it computes the same result as the previous algorithms, just faster. FastH brings two immediate benefits: (1) it improves upon the techniques from Zhang et al. (2018) in such a way that it is possible to speed up matrix operations, and (2) it speeds up previous work that employs the Householder decomposition, as done in, e.g., Tomczak and Welling (2016); van den Berg et al. (2018); Hoogeboom et al. (2019).

Broader Impact

Our algorithm speeds up the use of Householder decompositions in Neural Networks. This can positively impact researchers who use Householder decompositions, since they will be able to execute experiments faster. This is particularly beneficial for researchers with a constraint on their computational budget; in other words, a PhD student with one GPU stands to benefit more than a lab with state-of-the-art GPU computing infrastructure. The reduction in computing time also decreases power consumption and thus carbon emissions. However, as a potential negative impact, it is possible that the decrease in computation time will increase the usage of Neural Networks and thus increase overall carbon emissions.

8 Supplementary Material

8.1 Proof of Theorem 2.

Theorem.

Algorithm 2 computes $\frac{\partial L}{\partial X}$ and $\frac{\partial L}{\partial v_i}$ for all $i$ in $O(d^2m)$ time with $O(d/m + m)$ sequential matrix multiplications.

Proof.

Correctness. FastH computes gradients by the same equations as [17], so in most cases, we show correctness by clarifying how FastH computes the same thing, albeit faster.

Consider the gradients computed in Step 1. Unrolling the recurrence of Equation 3 gives

$$\frac{\partial L}{\partial A_{i+1}} = P_i^T \cdots P_1^T \frac{\partial L}{\partial A_1},$$

which is the same as that computed in [17].

Consider Step 2. Both $\frac{\partial L}{\partial \hat A_{j+1}}$ and $\frac{\partial L}{\partial \hat v_j}$ are computed as done in [17]. $\hat A_{j+1}$ is computed using Equation 4, similar to backpropagation without storing activations [5], but using the fact that $\hat H_j^{-1} = \hat H_j^T$.

Time Complexity. In Step 1, the for loop performs $d/m$ matrix multiplications. Due to the WY decomposition, each $P_i^T = I - 2Y_iW_i^T$ can be multiplied onto $\frac{\partial L}{\partial A_i}$ in $O(dm^2)$ time since $W_i, Y_i \in \mathbb{R}^{d\times m}$. The computation is repeated $d/m$ times and takes a total of $O(d^2m)$ time.

Line 12 of Step 2 in Algorithm 2 performs two Householder matrix multiplications, which take $O(dm)$ time, see Equation 4. In line 13, the gradient is computed by Equation 5; this sum also takes $O(dm)$ time to compute. Both computations on lines 12 and 13 are repeated $d/m \cdot m = d$ times, see lines 8 and 11. Therefore, the total time is $O(d^2m)$.

Number of Sequential Operations. Step 1 performs $O(d/m)$ sequential matrix operations. Lines 11–14 of Step 2 perform $O(m)$ sequential matrix multiplications. Since each iteration of lines 8–15 is run in parallel, the algorithm performs no more than $O(d/m + m)$ sequential matrix multiplications. ∎


8.2 Comparing Running Time

This subsection clarifies how the matrix exponential and the Cayley map were used in combination with the SVD reparameterization from [17]. It also provides further details on the exact computations we timed in the experiment. These details were left out of the main article as they require the introduction of some notation regarding a reparameterization function.

Let $M \in \mathbb{R}^{d\times d}$ be a weight matrix and let $\phi$ be a function that reparameterizes $M$ so $\phi(M)$ is orthogonal, allowing us to perform gradient descent wrt. $M$. The Householder decomposition can be used to construct such a function $\phi$ by letting the columns of $M$ be Householder vectors and $\phi(M)$ be the product of the resulting Householder matrices.

There exist alternative ways of constructing $\phi$ which do not rely on the Householder decomposition, for example, the matrix exponential approach, where $\phi(M) = \exp(A)$ for a skew-symmetric $A$ constructed from $M$, and the Cayley map approach, where $\phi(M) = (I+A)^{-1}(I-A)$ [2].

We record the joint time it takes to compute $\phi(M)X$ and the gradients wrt. $M$ and $X$ for a dummy input $X$. To simplify the gradient computation, we use a dummy gradient $G$ st. the gradient wrt. $\phi(M)X$ is $G$. It might be useful to think of $G$ as the gradient that arises by back-propagating through a Neural Network.

Both the dummy input $X$ and the dummy gradient $G$ have normally distributed entries $\mathcal{N}(0, 1)$.

Implementation Details.

The parallel algorithm from [17] halted for larger values of $d$. The failing code was not part of the main computation. This allowed us to remove the failing code and still get a good estimate of the running time of the parallel algorithm. We emphasize that removing the corresponding code makes the parallel algorithm faster. The experiments thus demonstrate that FastH is faster than a lower bound on the running time of the parallel algorithm.

8.3 Using the SVD to Compute Matrix Operations

This section requires first reading Section 4.1 and Section 4.2. Recall that we, in Section 4.2, want to measure the total time it takes to compute the matrix operation, the forward pass and the gradient computations. For example, with matrix inversion, we want to compute the matrix operation $W^{-1} = V\Sigma^{-1}U^T$, the forward pass $W^{-1}X$ and the gradient computations wrt. $U$, $\Sigma$, $V$ and $X$.

The time of the forward pass and gradient computations is no more than two multiplications and two gradient computations, which is exactly two times what we measured in Section 4.1. We re-used those measurements and added the time it takes to compute the matrix operation itself.

Overestimating the Time of FastH.

The matrix exponential and the Cayley map require one orthogonal matrix instead of two, i.e., $U$ instead of both $U$ and $V$. The WY decomposition then only needs to be computed for $U$ and not for both $U$ and $V$. By re-using the measurements from Section 4.1, we measure the time of two orthogonal matrices; this thus estimates an upper bound on the real running time of FastH.

Footnotes

  1. For the matrix exponential and the Cayley map we used the open-source implementation https://github.com/Lezcano/expRNN from Casado (2019). For the Householder decomposition, we used the open-source implementation https://github.com/zhangjiong724/spectral-RNN of the sequential and parallel algorithms from Zhang et al. (2018).

References

  1. Bischof and Van Loan (1987). The WY Representation for Products of Householder Matrices. SIAM Journal on Scientific and Statistical Computing.
  2. Casado (2019). Trivializations for Gradient-Based Optimization on Manifolds. In NeurIPS.
  3. Dinh et al. (2015). NICE: Non-Linear Independent Components Estimation. In ICLR (Workshop).
  4. Golinski et al. (2019). Improving Normalizing Flows via Better Orthogonal Parameterizations. In ICML Workshop on Invertible Neural Networks and Normalizing Flows.
  5. Gomez et al. (2017). The Reversible Residual Network: Backpropagation Without Storing Activations. In NIPS.
  6. Hoogeboom et al. (2019). Emerging Convolutions for Generative Normalizing Flows. In ICML.
  7. Kingma and Dhariwal (2018). Glow: Generative Flow with Invertible 1x1 Convolutions. In NeurIPS.
  8. Lezcano-Casado and Martínez-Rubio (2019). Cheap Orthogonal Constraints in Neural Networks: A Simple Parametrization of the Orthogonal and Unitary Group. In ICML.
  9. Li et al. (2020). Efficient Riemannian Optimization on the Stiefel Manifold via the Cayley Transform. In ICLR.
  10. Mhammedi et al. (2017). Efficient Orthogonal Parametrisation of Recurrent Neural Networks Using Householder Reflections. In ICML.
  11. Miyato et al. (2018). Spectral Normalization for Generative Adversarial Networks. In ICLR.
  12. Strang (2006). Linear Algebra and its Applications.
  13. Tomczak and Welling (2016). Improving Variational Auto-Encoders using Householder Flow. arXiv preprint.
  14. Uhlig (2001). Constructive Ways for Generating (Generalized) Real Orthogonal Matrices as Products of (Generalized) Symmetries. Linear Algebra and its Applications.
  15. van den Berg et al. (2018). Sylvester Normalizing Flows for Variational Inference. In UAI.
  16. Xue et al. (2013). Restructuring of Deep Neural Network Acoustic Models with Singular Value Decomposition.
  17. Zhang et al. (2018). Stabilizing Gradients for Deep Neural Networks via Efficient SVD Parameterization. In ICML.