
# Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions

N. Halko and P. G. Martinsson (Department of Applied Mathematics, University of Colorado at Boulder, Boulder, CO 80309-0526; supported by NSF awards #0748488 and #0610097) and J. A. Tropp (Computing & Mathematical Sciences, California Institute of Technology, MC 305-16, Pasadena, CA 91125-5000; supported by ONR award #N000140810883)
###### Abstract

Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets.

This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed—either explicitly or implicitly—to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, speed, and robustness. These claims are supported by extensive numerical experiments and a detailed error analysis.

The specific benefits of randomized techniques depend on the computational environment. Consider the model problem of finding the k dominant components of the singular value decomposition of an m × n matrix. (i) For a dense input matrix, randomized algorithms require O(mn log(k)) floating-point operations (flops) in contrast with O(mnk) for classical algorithms. (ii) For a sparse input matrix, the flop count matches classical Krylov subspace methods, but the randomized approach is more robust and can easily be reorganized to exploit multi-processor architectures. (iii) For a matrix that is too large to fit in fast memory, the randomized techniques require only a constant number of passes over the data, as opposed to O(k) passes for classical algorithms. In fact, it is sometimes possible to perform matrix approximation with a single pass over the data.

Key words. Dimension reduction, eigenvalue decomposition, interpolative decomposition, Johnson–Lindenstrauss lemma, matrix approximation, parallel algorithm, pass-efficient algorithm, principal component analysis, randomized algorithm, random matrix, rank-revealing QR factorization, singular value decomposition, streaming algorithm.

AMS subject classifications. [MSC2010] Primary: 65F30. Secondary: 68W20, 60B20.

Part I: Introduction

## 1 Overview

On a well-known list of the “Top 10 Algorithms” that have influenced the practice of science and engineering during the 20th century [40], we find an entry that is not really an algorithm: the idea of using matrix factorizations to accomplish basic tasks in numerical linear algebra. In the accompanying article [127], Stewart explains that

The underlying principle of the decompositional approach to matrix computation is that it is not the business of the matrix algorithmicists to solve particular problems but to construct computational platforms from which a variety of problems can be solved.

Stewart goes on to argue that this point of view has had many fruitful consequences, including the development of robust software for performing these factorizations in a highly accurate and provably correct manner.

The decompositional approach to matrix computation remains fundamental, but developments in computer hardware and the emergence of new applications in the information sciences have rendered the classical algorithms for this task inadequate in many situations:

• A salient feature of modern applications, especially in data mining, is that the matrices are stupendously big. Classical algorithms are not always well adapted to solving the type of large-scale problems that now arise.

• In the information sciences, it is common that data are missing or inaccurate. Classical algorithms are designed to produce highly accurate matrix decompositions, but it seems profligate to spend extra computational resources when the imprecision of the data inherently limits the resolution of the output.

• Data transfer now plays a major role in the computational cost of numerical algorithms. Techniques that require few passes over the data may be substantially faster in practice, even if they require as many—or more—floating-point operations.

• As the structure of computer processors continues to evolve, it becomes increasingly important for numerical algorithms to adapt to a range of novel architectures, such as graphics processing units.

The purpose of this paper is to make the case that randomized algorithms provide a powerful tool for constructing approximate matrix factorizations. These techniques are simple and effective, sometimes impressively so. Compared with standard deterministic algorithms, the randomized methods are often faster and—perhaps surprisingly—more robust. Furthermore, they can produce factorizations that are accurate to any specified tolerance above machine precision, which allows the user to trade accuracy for speed if desired. We present numerical evidence that these algorithms succeed for real computational problems.

In short, our goal is to demonstrate how randomized methods interact with classical techniques to yield effective, modern algorithms supported by detailed theoretical guarantees. We have made a special effort to help practitioners identify situations where randomized techniques may outperform established methods.

Throughout this article, we provide detailed citations to previous work on randomized techniques for computing low-rank approximations. The primary sources that inform our presentation include [17, 46, 58, 91, 105, 112, 113, 118, 137].

###### Remark 1.1

Our experience suggests that many practitioners of scientific computing view randomized algorithms as a desperate and final resort. Let us address this concern immediately. Classical Monte Carlo methods are highly sensitive to the random number generator and typically produce output with low and uncertain accuracy. In contrast, the algorithms discussed herein are relatively insensitive to the quality of randomness and produce highly accurate results. The probability of failure is a user-specified parameter that can be rendered negligible (say, less than 10^(−15)) with a nominal impact on the computational resources required.

### 1.1 Approximation by low-rank matrices

The roster of standard matrix decompositions includes the pivoted QR factorization, the eigenvalue decomposition, and the singular value decomposition (SVD), all of which expose the (numerical) range of a matrix. Truncated versions of these factorizations are often used to express a low-rank approximation of a given matrix:

 A ≈ BC, where A is m × n, B is m × k, and C is k × n. (1.1)

The inner dimension k is sometimes called the numerical rank of the matrix. When the numerical rank is much smaller than either dimension m or n, a factorization such as (1.1) allows the matrix to be stored inexpensively and to be multiplied rapidly with vectors or other matrices. The factorizations can also be used for data interpretation or to solve computational problems, such as least squares.

Matrices with low numerical rank appear in a wide variety of scientific applications. We list only a few:

• A basic method in statistics and data mining is to compute the directions of maximal variance in vector-valued data by performing principal component analysis (PCA) on the data matrix. PCA is nothing other than a low-rank matrix approximation [71, §14.5].

• Another standard technique in data analysis is to perform low-dimensional embedding of data under the assumption that there are fewer degrees of freedom than the ambient dimension would suggest. In many cases, the method reduces to computing a partial SVD of a matrix derived from the data. See [71, §§14.8–14.9] or [30].

• The problem of estimating parameters from measured data via least-squares fitting often leads to very large systems of linear equations that are close to linearly dependent. Effective techniques for factoring the coefficient matrix lead to efficient techniques for solving the least-squares problem [113].

• Many fast numerical algorithms for solving PDEs and for rapidly evaluating potential fields, such as the fast multipole method [66] and H-matrices [65], rely on low-rank approximations of continuum operators.

• Models of multiscale physical phenomena often involve PDEs with rapidly oscillating coefficients. Techniques for model reduction or coarse graining in such environments are often based on the observation that the linear transform that maps the input data to the requested output data can be approximated by an operator of low rank [56].

### 1.2 Matrix approximation framework

The task of computing a low-rank approximation to a given matrix can be split naturally into two computational stages. The first is to construct a low-dimensional subspace that captures the action of the matrix. The second is to restrict the matrix to the subspace and then compute a standard factorization (QR, SVD, etc.) of the reduced matrix. To be slightly more formal, we subdivide the computation as follows.

• Stage A: Compute an approximate basis for the range of the input matrix A. In other words, we require a matrix Q for which

 Q has orthonormal columns and A ≈ QQ∗A. (1.2)

We would like the basis matrix Q to contain as few columns as possible, but it is even more important to have an accurate approximation of the input matrix.

• Stage B: Given a matrix Q that satisfies (1.2), we use Q to help compute a standard factorization (QR, SVD, etc.) of A.

The task in Stage A can be executed very efficiently with random sampling methods, and these methods are the primary subject of this work. In the next subsection, we offer an overview of these ideas. The body of the paper provides details of the algorithms (§4) and a theoretical analysis of their performance (§§8–11).

Stage B can be completed with well-established deterministic methods. Section 3.3 contains an introduction to these techniques, and §5 shows how we apply them to produce low-rank factorizations.

At this point in the development, it may not be clear why the output from Stage A facilitates our job in Stage B. Let us illustrate by describing how to obtain an approximate SVD of the input matrix A given a matrix Q that satisfies (1.2). More precisely, we wish to compute matrices U and V with orthonormal columns and a nonnegative, diagonal matrix Σ such that A ≈ UΣV∗. This goal is achieved after three simple steps:

• Form B = Q∗A, which yields the low-rank factorization A ≈ QB.

• Compute an SVD of the small matrix: B = ŨΣV∗.

• Set U = QŨ.

When Q has few columns, this procedure is efficient because we can easily construct the reduced matrix B and rapidly compute its SVD. In practice, we can often avoid forming B explicitly by means of subtler techniques. In some cases, it is not even necessary to revisit the input matrix A during Stage B. This observation allows us to develop single-pass algorithms, which look at each entry of A only once.
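In a library with dense linear algebra support, the three steps above amount to a few lines. The following NumPy sketch (the function name and interface are ours, for illustration) assumes a basis Q satisfying (1.2) is already available:

```python
import numpy as np

def svd_from_basis(A, Q):
    """Stage B: given Q with orthonormal columns and A ≈ Q Q*A,
    return an approximate SVD with A ≈ U @ np.diag(s) @ Vt."""
    B = Q.T @ A                         # 1. form the reduced matrix B = Q*A
    U_tilde, s, Vt = np.linalg.svd(B, full_matrices=False)  # 2. SVD of the small matrix
    U = Q @ U_tilde                     # 3. set U = Q·Ũ
    return U, s, Vt
```

When Q has ℓ columns, the reduced matrix B is ℓ × n, so the SVD in step 2 costs only O(ℓ²n) flops.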

Similar manipulations readily yield other standard factorizations, such as the pivoted QR factorization, the eigenvalue decomposition, etc.

### 1.3 Randomized algorithms

This paper describes a class of randomized algorithms for completing Stage A of the matrix approximation framework set forth in §1.2. We begin with some details about the approximation problem these algorithms target (§1.3.1). Afterward, we motivate the random sampling technique with a heuristic explanation (§1.3.2) that leads to a prototype algorithm (§1.3.3).

#### 1.3.1 Problem formulations

The basic challenge in producing low-rank matrix approximations is a primitive question that we call the fixed-precision approximation problem. Suppose we are given a matrix A and a positive error tolerance ε. We seek a matrix Q with k orthonormal columns such that

 ∥A − QQ∗A∥ ≤ ε, (1.3)

where ∥·∥ denotes the operator norm. The range of Q is a k-dimensional subspace that captures most of the action of A, and we would like k to be as small as possible.

The singular value decomposition furnishes an optimal answer to the fixed-precision problem [97]. Let σ_j denote the jth largest singular value of A. For each j ≥ 0,

 min_{rank(X) ≤ j} ∥A − X∥ = σ_{j+1}. (1.4)

One way to construct a minimizer is to choose X = QQ∗A, where the columns of Q are the j dominant left singular vectors of A. Consequently, the minimal rank k where (1.3) holds equals the number of singular values of A that exceed the tolerance ε.

To simplify the development of algorithms, it is convenient to assume that the desired rank k is specified in advance. We call the resulting problem the fixed-rank approximation problem. Given a matrix A, a target rank k, and an oversampling parameter p, we seek to construct a matrix Q with k + p orthonormal columns such that

 ∥A − QQ∗A∥ ≈ min_{rank(X) ≤ k} ∥A − X∥. (1.5)

Although there exists a minimizer Q that solves the fixed-rank problem for p = 0, the opportunity to use a small number of additional columns provides a flexibility that is crucial for the effectiveness of the computational methods we discuss.

We will demonstrate that algorithms for the fixed-rank problem can be adapted to solve the fixed-precision problem. The connection is based on the observation that we can build the basis matrix Q incrementally and, at any point in the computation, we can inexpensively estimate the residual error ∥A − QQ∗A∥. Refer to §4.4 for the details of this reduction.
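One inexpensive estimator uses a few random probe vectors: if ω is standard Gaussian, then ∥(I − QQ∗)Aω∥ is on the order of the residual norm, and the maximum over r probes yields an upper bound that fails with probability about 10^(−r). The sketch below follows the standard analysis (the constant 10·√(2/π) and the interface are presented as an illustration, not as the paper's exact algorithm):

```python
import numpy as np

def estimate_residual(A, Q, r=10, seed=None):
    """Probe-based upper estimate of the residual norm ||A - Q Q*A||.

    Draws r Gaussian probes; with probability about 1 - 10^(-r), the
    true spectral-norm residual does not exceed the returned value."""
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((A.shape[1], r))
    Y = A @ Omega
    R = Y - Q @ (Q.T @ Y)               # residual operator applied to the probes
    return 10 * np.sqrt(2 / np.pi) * np.max(np.linalg.norm(R, axis=0))
```

Because the estimate needs only a handful of extra matrix–vector multiplies, it can be evaluated repeatedly as columns are added to Q.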

#### 1.3.2 Intuition

To understand how randomness helps us solve the fixed-rank problem, it is helpful to consider some motivating examples.

First, suppose that we seek a basis for the range of a matrix A with exact rank k. Draw a random vector ω, and form the product y = Aω. For now, the precise distribution of the random vector is unimportant; just think of y as a random sample from the range of A. Let us repeat this sampling process k times:

 y^(i) = Aω^(i),  i = 1, 2, …, k. (1.6)

Owing to the randomness, the set {ω^(i)} of random vectors is likely to be in general linear position. In particular, the random vectors form a linearly independent set and no linear combination falls in the null space of A. As a result, the set {y^(i)} of sample vectors is also linearly independent, so it spans the range of A. Therefore, to produce an orthonormal basis for the range of A, we just need to orthonormalize the sample vectors.

Now, imagine that A = B + E, where B is a rank-k matrix containing the information we seek and E is a small perturbation. Our priority is to obtain a basis that covers as much of the range of B as possible, rather than to minimize the number of basis vectors. Therefore, we fix a small number p, and we generate k + p samples

 y^(i) = Aω^(i) = Bω^(i) + Eω^(i),  i = 1, 2, …, k + p. (1.7)

The perturbation E shifts the direction of each sample vector outside the range of B, which can prevent the span of the first k samples {y^(1), …, y^(k)} from covering the entire range of B. In contrast, the enriched set {y^(1), …, y^(k+p)} of samples has a much better chance of spanning the required subspace.

Just how many extra samples do we need? Remarkably, for certain types of random sampling schemes, the failure probability decreases superexponentially with the oversampling parameter p; see (1.9). As a practical matter, setting p = 5 or p = 10 often gives superb results. This observation is one of the principal facts supporting the randomized approach to numerical linear algebra.

#### 1.3.3 A prototype algorithm

The intuitive approach of §1.3.2 can be applied to general matrices. Omitting computational details for now, we formalize the procedure in the figure labeled Proto-Algorithm.

This simple algorithm is by no means new. It is essentially the first step of a subspace iteration with a random initial subspace [61, §7.3.2]. The novelty comes from the additional observation that the initial subspace should have a slightly higher dimension than the invariant subspace we are trying to approximate. With this revision, it is often the case that no further iteration is required to obtain a high-quality solution to (1.5). We believe this idea can be traced to [118, 91, 105].
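In NumPy, the proto-algorithm is only a few lines. This sketch (interface ours; a Gaussian test matrix is one valid choice among several discussed later) draws k + p random samples from the range and orthonormalizes them:

```python
import numpy as np

def proto_range_finder(A, k, p=10, seed=None):
    """Proto-algorithm: draw k + p Gaussian samples from the range
    of A and orthonormalize them to obtain a basis Q."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    Omega = rng.standard_normal((n, k + p))   # random test matrix
    Y = A @ Omega                             # sample matrix
    Q, _ = np.linalg.qr(Y)                    # orthonormal basis for range(Y)
    return Q
```

For a matrix with exact rank k, the resulting basis captures the range up to roundoff; for a nearly low-rank matrix, the oversampling parameter p absorbs the perturbation.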

In order to invoke the proto-algorithm with confidence, we must address several practical and theoretical issues:

• What random matrix Ω should we use? How much oversampling do we need?

• The matrix Y of samples is likely to be ill-conditioned. How do we orthonormalize its columns to form the matrix Q?

• What are the computational costs?

• How can we solve the fixed-precision problem (1.3) when the numerical rank of the matrix is not known in advance?

• How can we use the basis Q to compute other matrix factorizations?

• Does the randomized method work for problems of practical interest? How does its speed/accuracy/robustness compare with standard techniques?

• What error bounds can we expect? With what probability?

The next few sections provide a summary of the answers to these questions. We describe several problem regimes where the proto-algorithm can be implemented efficiently, and we present a theorem that describes the performance of the most important instantiation. Finally, we elaborate on how these ideas can be applied to approximate the truncated SVD of a large data matrix. The rest of the paper contains a more exhaustive treatment—including pseudocode, numerical experiments, and a detailed theory.

### 1.4 A comparison between randomized and traditional techniques

To select an appropriate computational method for finding a low-rank approximation to a matrix, the practitioner must take into account the properties of the matrix. Is it dense or sparse? Does it fit in fast memory or is it stored out of core? Does the singular spectrum decay quickly or slowly? The behavior of a numerical linear algebra algorithm may depend on all these factors [13, 61, 132]. To facilitate a comparison between classical and randomized techniques, we summarize their relative performance in each of three representative environments. Section 6 contains a more in-depth treatment.

We focus on the task of computing an approximate SVD of an m × n matrix A with numerical rank k. For randomized schemes, Stage A generally dominates the cost of Stage B in our matrix approximation framework (§1.2). Within Stage A, the computational bottleneck is usually the matrix–matrix product AΩ in Step 2 of the proto-algorithm (§1.3.3). The power of randomized algorithms stems from the fact that we can reorganize this matrix multiplication for maximum efficiency in a variety of computational architectures.

#### 1.4.1 A general dense matrix that fits in fast memory

A standard deterministic technique for computing an approximate SVD is to perform a rank-revealing QR factorization of the matrix, and then to manipulate the factors to obtain the final decomposition. The cost of this approach is typically O(kmn) floating-point operations, or flops, although these methods require slightly longer running times in rare cases [68].

In contrast, randomized schemes can produce an approximate SVD using only O(mn log(k) + (m + n)k²) flops. The gain in asymptotic complexity is achieved by using a random matrix Ω that has some internal structure, which allows us to evaluate the product AΩ rapidly. For example, randomizing and subsampling the discrete Fourier transform works well. Sections 4.6 and 11 contain more information on this approach.
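As a concrete illustration, a subsampled randomized Fourier transform (SRFT) test matrix can be applied with off-the-shelf FFTs. The sketch below (interface ours) forms Y = AΩ with Ω = √(n/ℓ)·DFR, where D is a random diagonal of unit-modulus entries, F is the unitary DFT, and R restricts to ℓ columns chosen at random. Note that this simple version costs O(mn log n); reaching O(mn log k) requires a pruned transform:

```python
import numpy as np

def srft_sample(A, ell, seed=None):
    """Form Y = A @ Omega for an n x ell SRFT test matrix
    Omega = sqrt(n/ell) * D F R."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    d = np.exp(2j * np.pi * rng.random(n))         # diagonal of D: random phases
    ADF = np.fft.fft(A * d, axis=1) / np.sqrt(n)   # (A D) F via row-wise unitary FFT
    cols = rng.choice(n, size=ell, replace=False)  # restriction R to ell columns
    return np.sqrt(n / ell) * ADF[:, cols]
```

The sample matrix Y is complex even for real input; orthonormalizing its columns yields a basis for an approximation of the range of A, exactly as in the Gaussian case.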

#### 1.4.2 A matrix for which matrix–vector products can be evaluated rapidly

When the matrix is sparse or structured, we may be able to apply it rapidly to a vector. In this case, the classical prescription for computing a partial SVD is to invoke a Krylov subspace method, such as the Lanczos or Arnoldi algorithm. It is difficult to summarize the computational cost of these methods because their performance depends heavily on properties of the input matrix and on the amount of effort spent to stabilize the algorithm. (Inherently, the Lanczos and Arnoldi methods are numerically unstable.) For the same reasons, the error analysis of such schemes is unsatisfactory in many important environments.

At the risk of being overly simplistic, we claim that the typical cost of a Krylov method for approximating the k leading singular vectors of the input matrix is proportional to k·T_mult + k²(m + n), where T_mult denotes the cost of a matrix–vector multiplication with the input matrix and the constant of proportionality is small. We can also apply randomized methods using a Gaussian test matrix to complete the factorization at the same cost, k·T_mult + k²(m + n) flops.

With a given budget of floating-point operations, Krylov methods sometimes deliver a more accurate approximation than randomized algorithms. Nevertheless, the methods described in this survey have at least two powerful advantages over Krylov methods. First, the randomized schemes are inherently stable, and they come with very strong performance guarantees that do not depend on subtle spectral properties of the input matrix. Second, the matrix–vector multiplies required to form AΩ can be performed in parallel. This fact allows us to restructure the calculations to take full advantage of the computational platform, which can lead to dramatic accelerations in practice, especially for parallel and distributed machines.

A more detailed comparison of randomized schemes and Krylov subspace methods is given in §6.2.

#### 1.4.3 A general dense matrix stored in slow memory or streamed

When the input matrix is too large to fit in core memory, the cost of transferring the matrix from slow memory typically dominates the cost of performing the arithmetic. The standard techniques for low-rank approximation described in §1.4.1 require O(k) passes over the matrix, which can be prohibitively expensive.

In contrast, the proto-algorithm of §1.3.3 requires only one pass over the data to produce the approximate basis Q for Stage A of the approximation framework. This straightforward approach, unfortunately, is not accurate enough for matrices whose singular spectrum decays slowly, but we can address this problem using very few additional passes over the data [112]. See §1.6 or §4.5 for more discussion.

Typically, Stage B uses one additional pass over the matrix to construct the approximate SVD. With slight modifications, however, the two-stage randomized scheme can be revised so that it only makes a single pass over the data. Refer to §5.5 for information.

### 1.5 Performance analysis

A principal goal of this paper is to provide a detailed analysis of the performance of the proto-algorithm described in §1.3.3. This investigation produces precise error bounds, expressed in terms of the singular values of the input matrix. Furthermore, we determine how several choices of the random test matrix Ω impact the behavior of the algorithm.

Let us offer a taste of this theory. The following theorem describes the average-case behavior of the proto-algorithm with a Gaussian test matrix, assuming we perform the computation in exact arithmetic. This result is a simplified version of Theorem 10.6.

###### Theorem 1.1

Suppose that A is a real m × n matrix. Select a target rank k ≥ 2 and an oversampling parameter p ≥ 2, where k + p ≤ min{m, n}. Execute the proto-algorithm with a standard Gaussian test matrix to obtain an m × (k + p) matrix Q with orthonormal columns. Then

 E∥A − QQ∗A∥ ≤ [1 + 4√(k + p)/(p − 1) · √(min{m, n})] σ_{k+1}, (1.8)

where E denotes expectation with respect to the random test matrix and σ_{k+1} is the (k + 1)th singular value of A.

We recall that the term σ_{k+1} appearing in (1.8) is the smallest possible error (1.4) achievable with any basis matrix Q with k columns. The theorem asserts that, on average, the algorithm produces a basis whose error lies within a small polynomial factor of the theoretical minimum. Moreover, the error bound (1.8) in the randomized algorithm is slightly sharper than comparable bounds for deterministic techniques based on rank-revealing QR algorithms [68].

The reader might be worried about whether the expectation provides a useful account of the approximation error. Fear not: the actual outcome of the algorithm is almost always very close to the typical outcome because of measure concentration effects. As we discuss in §10.3, the probability that the error satisfies

 ∥A − QQ∗A∥ ≤ [1 + 11√(k + p) · √(min{m, n})] σ_{k+1} (1.9)

is at least 1 − 6·p^(−p) under very mild assumptions on p. This fact justifies the use of an oversampling term as small as p = 5. This simplified estimate is very similar to the major results in [91].
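The bound (1.9) is easy to probe numerically. The following sketch (dimensions and spectrum are arbitrary choices for illustration) draws a matrix with known singular values, runs the proto-algorithm once, and checks that the realized error falls between the theoretical minimum for a rank-(k + p) approximation and the right-hand side of (1.9):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k, p = 200, 100, 10, 10

# Build a matrix with a known, geometrically decaying spectrum.
U, _ = np.linalg.qr(rng.standard_normal((m, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
sigma = 0.8 ** np.arange(n)         # sigma[j] is the (j+1)th singular value
A = (U * sigma) @ V.T

# One run of the proto-algorithm with a Gaussian test matrix.
Q, _ = np.linalg.qr(A @ rng.standard_normal((n, k + p)))
err = np.linalg.norm(A - Q @ (Q.T @ A), 2)

# Realized error: at least the optimum for k + p columns,
# and at most the right-hand side of the tail bound (1.9).
bound = (1 + 11 * np.sqrt(k + p) * np.sqrt(min(m, n))) * sigma[k]
assert sigma[k + p] <= err <= bound
```

In such experiments the realized error is typically a small multiple of σ_{k+1}, far below the worst-case right-hand side.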

The theory developed in this paper provides much more detailed information about the performance of the proto-algorithm.

• When the singular values of A decay slightly, the error ∥A − QQ∗A∥ does not depend on the dimensions of the matrix (§§10.2–10.3).

• We can reduce the size of the bracket in the error bound (1.8) by combining the proto-algorithm with a power iteration (§10.4). For an example, see §1.6 below.

• For the structured random matrices we mentioned in §1.4.1, related error bounds are in force (§11).

• We can obtain inexpensive a posteriori error estimates to verify the quality of the approximation (§4.3).

### 1.6 Example: Randomized SVD

We conclude this introduction with a short discussion of how these ideas allow us to perform an approximate SVD of a large data matrix, which is a compelling application of randomized matrix approximation [112].

The two-stage randomized method offers a natural approach to SVD computations. Unfortunately, the simplest version of this scheme is inadequate in many applications because the singular spectrum of the input matrix may decay slowly. To address this difficulty, we incorporate q steps of a power iteration, where q = 1 or q = 2 usually suffices in practice. The complete scheme appears in the box labeled Prototype for Randomized SVD. For most applications, it is important to incorporate additional refinements, as we discuss in §§4–5.

The Randomized SVD procedure requires only 2(q + 1) passes over the matrix, so it is efficient even for matrices stored out-of-core. The flop count satisfies

 T_randSVD = (2q + 2) k T_mult + O(k²(m + n)),

where T_mult is the flop count of a matrix–vector multiply with A or A∗. We have the following theorem on the performance of this method in exact arithmetic, which is a consequence of Corollary 10.10.
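A minimal sketch of the scheme in NumPy (interface ours; we re-orthonormalize between multiplications, a standard stabilization of the power iteration discussed later in the paper):

```python
import numpy as np

def randomized_svd(A, k, p=10, q=2, seed=None):
    """Randomized SVD with q power-iteration steps.

    Returns U, s, Vt of rank k + p with A ≈ U @ np.diag(s) @ Vt.
    Intermediate QR factorizations keep the power iteration stable
    when the singular spectrum decays quickly."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    Q, _ = np.linalg.qr(A @ rng.standard_normal((n, k + p)))
    for _ in range(q):                  # applies (A A*)^q A Omega, implicitly
        W, _ = np.linalg.qr(A.T @ Q)
        Q, _ = np.linalg.qr(A @ W)
    B = Q.T @ A                         # Stage B on the reduced matrix
    U_tilde, s, Vt = np.linalg.svd(B, full_matrices=False)
    return Q @ U_tilde, s, Vt
```

Truncating the output to the first k triples, U[:, :k] @ np.diag(s[:k]) @ Vt[:k], yields a rank-k approximation at the cost of a modest additional error, as discussed below.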

###### Theorem 1.2

Suppose that A is a real m × n matrix. Select an exponent q and a target number k of singular vectors, where 2 ≤ k ≤ 0.5·min{m, n}. Execute the Randomized SVD algorithm to obtain a rank-2k factorization UΣV∗. Then

 E∥A − UΣV∗∥ ≤ [1 + 4√(2·min{m, n}/(k − 1))]^{1/(2q+1)} σ_{k+1}, (1.10)

where E denotes expectation with respect to the random test matrix and σ_{k+1} is the (k + 1)th singular value of A.

This result is new. Observe that the bracket in (1.10) is essentially the same as the bracket in the basic error bound (1.8). We find that the power iteration drives the leading constant to one exponentially fast as the power q increases. The rank-2k approximation of A can never achieve an error smaller than σ_{2k+1}, so the randomized procedure computes 2k approximate singular vectors that capture as much of the matrix as the first 2k actual singular vectors.

In practice, we can truncate the approximate SVD, retaining only the first k singular values and vectors. Equivalently, we replace the diagonal factor Σ by the matrix Σ^(k) formed by zeroing out all but the largest k entries of Σ. For this truncated SVD, we have the error bound

 E∥A − UΣ^(k)V∗∥ ≤ σ_{k+1} + [1 + 4√(2·min{m, n}/(k − 1))]^{1/(2q+1)} σ_{k+1}. (1.11)

In words, we pay no more than an additive term σ_{k+1} when we perform the truncation step. Our numerical experience suggests that the error bound (1.11) is pessimistic. See the remark and analysis of truncation later in the paper for some discussion.

### 1.7 Outline of paper

The paper is organized into three parts: an introduction (§§1–3), a description of the algorithms (§§4–7), and a theoretical performance analysis (§§8–11). The two latter parts commence with a short internal outline. Each part is more or less self-contained, and after a brief review of our notation in §§3.1–3.2, the reader can proceed to either the algorithms or the theory part.

## 2 Related work and historical context

Randomness has occasionally surfaced in the numerical linear algebra literature; in particular, it is quite standard to initialize iterative algorithms for constructing invariant subspaces with a randomly chosen point. Nevertheless, we believe that sophisticated ideas from random matrix theory have not been incorporated into classical matrix factorization algorithms until very recently. We can trace this development to earlier work in computer science and—especially—to probabilistic methods in geometric analysis. This section presents an overview of the relevant work. We begin with a survey of randomized methods for matrix approximation; then we attempt to trace some of the ideas backward to their sources.

### 2.1 Randomized matrix approximation

Matrices of low numerical rank contain little information relative to their apparent dimension owing to the linear dependency in their columns (or rows). As a result, it is reasonable to expect that these matrices can be approximated with far fewer degrees of freedom. A less obvious fact is that randomized schemes can be used to produce these approximations efficiently.

Several types of approximation techniques build on this idea. These methods all follow the same basic pattern:

1. Preprocess the matrix, usually to calculate sampling probabilities.

2. Take random samples from the matrix, where the term sample refers generically to a linear function of the matrix.

3. Postprocess the samples to compute a final approximation, typically with classical techniques from numerical linear algebra. This step may require another look at the matrix.

We continue with a description of the most common approximation schemes.

#### 2.1.1 Sparsification

The simplest approach to matrix approximation is the method of sparsification or the related technique of quantization. The goal of sparsification is to replace the matrix by a surrogate that contains far fewer nonzero entries. Quantization produces an approximation whose components are drawn from a (small) discrete set of values. These methods can be used to limit storage requirements or to accelerate computations by reducing the cost of matrix–vector and matrix–matrix multiplies [94, Ch. 6]. The manuscript [33] describes applications in optimization.

Sparsification typically involves very simple elementwise calculations. Each entry in the approximation is drawn independently at random from a distribution determined from the corresponding entry of the input matrix. The expected value of the random approximation equals the original matrix, but the distribution is designed so that a typical realization is much sparser.

The first method of this form was devised by Achlioptas and McSherry [2], who built on earlier work on graph sparsification due to Karger [75, 76]. Arora–Hazan–Kale presented a different sampling method in [7]. See [123, 60] for some recent work on sparsification.

#### 2.1.2 Column selection methods

A second approach to matrix approximation is based on the idea that a small set of columns describes most of the action of a numerically low-rank matrix. Indeed, classical existential results [117] demonstrate that every m × n matrix A contains a k-column submatrix C for which

 ∥A − CC†A∥ ≤ √(1 + k(n − k)) · ∥A − A(k)∥, (2.1)

where k is a parameter, the dagger † denotes the pseudoinverse, and A(k) is a best rank-k approximation of A. It is NP-hard to perform column selection by optimizing natural objective functions, such as the condition number of the submatrix [27]. Nevertheless, there are efficient deterministic algorithms, such as the rank-revealing QR method of [68], that can nearly achieve the error bound (2.1).

There is a class of randomized algorithms that approach the fixed-rank approximation problem (LABEL:eq:fixed_rank) using this intuition. These methods first compute a sampling probability for each column, either using the squared Euclidean norms of the columns or their leverage scores. (Leverage scores reflect the relative importance of the columns to the action of the matrix; they can be calculated easily from the dominant right singular vectors of the matrix.) Columns are then selected randomly according to this distribution. Afterward, a postprocessing step is invoked to produce a more refined approximation of the matrix.

We believe that the earliest method of this form appeared in a 1998 paper of Frieze–Kannan–Vempala [57, 58]. This work was refined substantially in the papers [44, 43, 46]. The basic algorithm samples columns from a distribution related to the squared norms of the columns. This sampling step produces a small column submatrix whose range is aligned with the range of the input matrix. The final approximation is obtained from a truncated SVD of the submatrix. Given a target rank k and a parameter ε > 0, this approach samples a number of columns determined by k and ε to produce a rank-k approximation B that satisfies

 ∥A − B∥F ≤ ∥A − A(k)∥F + ε ∥A∥F, (2.2)

where ∥·∥F denotes the Frobenius norm. We note that the algorithm of [46] requires only a constant number of passes over the data.
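The sampling step of this approach can be sketched in a few lines of NumPy. The sketch below is illustrative only (the function name and the choice to orthonormalize via QR are ours, not from the cited papers): columns are drawn with probability proportional to their squared norms and rescaled by the standard importance-sampling factor, and the resulting column sample is used as an approximate basis for the range.

```python
import numpy as np

def norm_sampled_basis(A, ell, rng=None):
    """Illustrative sketch of squared-norm column sampling.

    Draws ell columns of A with probability proportional to their
    squared Euclidean norms, rescales them so that E[C C*] = A A*,
    and returns an orthonormal basis for the span of the sample.
    """
    rng = np.random.default_rng() if rng is None else rng
    col_norms2 = np.sum(np.abs(A) ** 2, axis=0)
    probs = col_norms2 / col_norms2.sum()
    idx = rng.choice(A.shape[1], size=ell, replace=True, p=probs)
    # Importance-sampling rescaling of the selected columns.
    C = A[:, idx] / np.sqrt(ell * probs[idx])
    Q, _ = np.linalg.qr(C)
    return Q

# A rank-k approximation is obtained by projecting A onto the sample span.
rng = np.random.default_rng(1)
A = rng.standard_normal((100, 8)) @ rng.standard_normal((8, 60))  # rank 8
Q = norm_sampled_basis(A, ell=30, rng=rng)
B = Q @ (Q.T @ A)
```

Because the test matrix here has exact rank 8, a modest number of sampled columns almost surely spans its range, and the projection recovers A essentially exactly; for matrices with slowly decaying spectra the error behaves as in (2.2).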

Rudelson and Vershynin later showed that the same type of column sampling method also yields spectral-norm error bounds [116]. The techniques in their paper have been very influential; their work has found other applications in randomized regression [52], sparse approximation [133], and compressive sampling [19].

Deshpande et al. [37, 38] demonstrated that the error in the column sampling approach can be improved by iteration and adaptive volume sampling. They showed that it is possible to produce a rank-k matrix B that satisfies

 ∥A − B∥F ≤ (1 + ε) ∥A − A(k)∥F (2.3)

using a k-pass algorithm. Around the same time, Har-Peled [70] independently developed a recursive algorithm that offers the same approximation guarantees. Very recently, Deshpande and Rademacher have improved the running time of volume-based sampling methods [36].

Drineas et al. and Boutsidis et al. have also developed randomized algorithms for the column subset selection problem, which requests a column submatrix that achieves a bound of the form (2.1). Via the methods of Rudelson and Vershynin [116], they showed that sampling columns according to their leverage scores is likely to produce the required submatrix [50, 51]. Subsequent work [17, 18] showed that postprocessing the sampled columns with a rank-revealing QR algorithm can reduce the number of output columns required to achieve the bound (2.1). The argument in [17] explicitly decouples the linear algebraic part of the analysis from the random matrix theory. The theoretical analysis in the present work involves a very similar technique.

#### 2.1.3 Approximation by dimension reduction

A third approach to matrix approximation is based on the concept of dimension reduction. Since the rows of a low-rank matrix are linearly dependent, they can be embedded into a low-dimensional space without altering their geometric properties substantially. A random linear map provides an efficient, nonadaptive way to perform this embedding. (Column sampling can also be viewed as an adaptive form of dimension reduction.)

The proto-algorithm we set forth in §LABEL:sec:proto-algorithm is simply a dual description of the dimension reduction approach: collecting random samples from the column space of the matrix is equivalent to reducing the dimension of the rows. No precomputation is required to obtain the sampling distribution, but the sample itself takes some work to collect. Afterward, we orthogonalize the samples as preparation for constructing various matrix approximations.

We believe that the idea of using dimension reduction for algorithmic matrix approximation first appeared in a 1998 paper of Papadimitriou et al. [104, 105], who described an application to latent semantic indexing (LSI). They suggested projecting the input matrix onto a random subspace and compressing the original matrix to (a subspace of) the range of the projected matrix. They established error bounds that echo the result (2.2) of Frieze et al. [58]. Although the Euclidean column selection method is a more computationally efficient way to obtain this type of error bound, dimension reduction has other advantages, e.g., in terms of accuracy.

Sarlós argued in [118] that the computational costs of dimension reduction can be reduced substantially by means of the structured random maps proposed by Ailon–Chazelle [3]. Sarlós used these ideas to develop efficient randomized algorithms for least-squares problems; he also studied approximate matrix multiplication and low-rank matrix approximation. The recent paper [102] analyzes a very similar matrix approximation algorithm using Rudelson and Vershynin’s methods [116].

The initial work of Sarlós on structured dimension reduction did not immediately yield algorithms for low-rank matrix approximation that were superior to classical techniques. Woolfe et al. showed how to obtain an improvement in asymptotic computational cost, and they applied these techniques to problems in scientific computing [137]. Related work includes [86, 88].

Martinsson–Rokhlin–Tygert have studied dimension reduction using a Gaussian transform matrix, and they demonstrated that this approach performs much better than earlier analyses had suggested [91]. Their work highlights the importance of oversampling, and their error bounds are very similar to the estimate (LABEL:eq:intro_err_prob) we presented in the introduction. They also demonstrated that dimension reduction can be used to compute an interpolative decomposition of the input matrix, which is essentially equivalent to performing column subset selection.

Rokhlin–Szlam–Tygert have shown that combining dimension reduction with a power iteration is an effective way to improve its performance [112]. These ideas lead to very efficient randomized methods for large-scale PCA [69]. An efficient, numerically stable version of the power iteration is discussed in §LABEL:sec:powerscheme, as well as [92]. Related ideas appear in a paper of Roweis [114].

Very recently, Clarkson and Woodruff [29] have developed one-pass algorithms for performing low-rank matrix approximation, and they have established lower bounds which prove that many of their algorithms have optimal or near-optimal resource guarantees, modulo constants.

#### 2.1.4 Approximation by submatrices

The matrix approximation literature contains a subgenre that discusses methods for building an approximation from a submatrix and computed coefficient matrices. For example, we can construct an approximation using a subcollection of columns (the interpolative decomposition), a subcollection of rows and a subcollection of columns (the CUR decomposition), or a square submatrix (the matrix skeleton). This type of decomposition was developed and studied in several papers, including [26, 64, 126]. For data analysis applications, see the recent paper [89].

A number of works develop randomized algorithms for this class of matrix approximations. Drineas et al. have developed techniques for computing CUR decompositions, which express A ≈ CUR, where C and R denote small column and row submatrices of A and where U is a small linkage matrix. These methods identify columns (rows) that approximate the range (corange) of the matrix; the linkage matrix U is then computed by solving a small least-squares problem. A randomized algorithm for CUR approximation with controlled absolute error appears in [47]; a relative error algorithm appears in [51]. We also mention a paper on computing a closely related factorization called the compact matrix decomposition [129].

It is also possible to produce interpolative decompositions and matrix skeletons using randomized methods, as discussed in [91, 112] and §LABEL:sec:postrows of the present work.

#### 2.1.5 Other numerical problems

The literature contains a variety of other randomized algorithms for solving standard problems in and around numerical linear algebra. We list some of the basic references.

Tensor skeletons.

Randomized column selection methods can be used to produce CUR-type decompositions of higher-order tensors [49].

Matrix multiplication.

Column selection and dimension reduction techniques can be used to accelerate the multiplication of rank-deficient matrices [45, 118]. See also [10].

Overdetermined linear systems.

The randomized Kaczmarz algorithm is a linearly convergent iterative method that can be used to solve overdetermined linear systems [101, 128].

Overdetermined least squares.

Fast dimension-reduction maps can sometimes accelerate the solution of overdetermined least-squares problems [52, 118].

Nonnegative least squares.

Fast dimension reduction maps can be used to reduce the size of nonnegative least-squares problems [16].

Preconditioned least squares.

Randomized matrix approximations can be used to precondition conjugate gradient to solve least-squares problems [113].

Other regression problems.

Randomized algorithms for ℓ1 regression are described in [28]. Regression in ℓp for p ∈ [1, ∞) has also been considered [31].

Facility location.

The Fermat–Weber facility location problem can be viewed as matrix approximation with respect to a different discrepancy measure. Randomized algorithms for this type of problem appear in [121].

#### 2.1.6 Compressive sampling

Although randomized matrix approximation and compressive sampling are based on some common intuitions, it is facile to consider either one as a subspecies of the other. We offer a short overview of the field of compressive sampling—especially the part connected with matrices—so we can highlight some of the differences.

The theory of compressive sampling starts with the observation that many types of vector-space data are compressible. That is, the data are approximated well using a short linear combination of basis functions drawn from a fixed collection [42]. For example, natural images are well approximated in a wavelet basis; numerically low-rank matrices are well approximated as a sum of rank-one matrices. The idea behind compressive sampling is that suitably chosen random samples from this type of compressible object carry a large amount of information. Furthermore, it is possible to reconstruct the compressible object from a small set of these random samples, often by solving a convex optimization problem. The initial discovery works of Candès–Romberg–Tao [20] and Donoho [41] were written in 2004.

The earliest work in compressive sampling focused on vector-valued data; soon after, researchers began to study compressive sampling for matrices. In 2007, Recht–Fazel–Parrilo demonstrated that it is possible to reconstruct a rank-deficient matrix from Gaussian measurements [111]. More recently, Candès–Recht [22] and Candès–Tao [23] considered the problem of completing a low-rank matrix from a random sample of its entries.

The usual goals of compressive sampling are (i) to design a method for collecting informative, nonadaptive data about a compressible object and (ii) to reconstruct a compressible object given some measured data. In both cases, there is an implicit assumption that we have limited—if any—access to the underlying data.

In the problem of matrix approximation, we typically have a complete representation of the matrix at our disposal. The point is to compute a simpler representation as efficiently as possible under some operational constraints. In particular, we would like to perform as little computation as we can, but we are usually allowed to revisit the input matrix. Because of the different focus, randomized matrix approximation algorithms require fewer random samples from the matrix and use fewer computational resources than compressive sampling reconstruction algorithms.

### 2.2 Origins

This section attempts to identify some of the major threads of research that ultimately led to the development of the randomized techniques we discuss in this paper.

#### 2.2.1 Random embeddings

The field of random embeddings is a major precursor to randomized matrix approximation. In a celebrated 1984 paper [74], Johnson and Lindenstrauss showed that the pairwise distances among a collection of N points in a Euclidean space are approximately maintained when the points are mapped randomly to a Euclidean space of dimension O(log N). In other words, random embeddings preserve Euclidean geometry. Shortly afterward, Bourgain showed that appropriate random low-dimensional embeddings preserve the geometry of point sets in finite-dimensional ℓ1 spaces [15].

These observations suggest that we might be able to solve some computational problems of a geometric nature more efficiently by translating them into a lower-dimensional space and solving them there. This idea was cultivated by the theoretical computer science community beginning in the late 1980s, with research flowering in the late 1990s. In particular, nearest-neighbor search can benefit from dimension-reduction techniques [73, 78, 80]. The papers [57, 104] were apparently the first to apply this approach to linear algebra.

Around the same time, researchers became interested in simplifying the form of dimension reduction maps and improving the computational cost of applying the map. Several researchers developed refined results on the performance of a Gaussian matrix as a linear dimension reduction map [32, 73, 93]. Achlioptas demonstrated that discrete random matrices would serve nearly as well [1]. In 2006, Ailon and Chazelle proposed the fast Johnson–Lindenstrauss transform [3], which combines the speed of the FFT with the favorable embedding properties of a Gaussian matrix. Subsequent refinements appear in [4, 87]. Sarlós then imported these techniques to study several problems in numerical linear algebra, which has led to some of the fastest algorithms currently available [88, 137].

#### 2.2.2 Data streams

Muthukrishnan argues that a distinguishing feature of modern data is the manner in which it is presented to us. The sheer volume of information and the speed at which it must be processed tax our ability to transmit the data elsewhere, to compute complicated functions on the data, or to store a substantial part of the data [100, §3]. As a result, computer scientists have started to develop algorithms that can address familiar computational problems under these novel constraints. The data stream phenomenon is one of the primary justifications cited by [45] for developing pass-efficient methods for numerical linear algebra problems, and it is also the focus of the recent treatment [29].

One of the methods for dealing with massive data sets is to maintain sketches, which are small summaries that allow functions of interest to be calculated. In the simplest case, a sketch is simply a random projection of the data, but it might be a more sophisticated object [100, §5.1]. The idea of sketching can be traced to the work of Alon et al. [6, 5].

#### 2.2.3 Numerical linear algebra

Classically, the field of numerical linear algebra has focused on developing deterministic algorithms that produce highly accurate matrix approximations with provable guarantees. Nevertheless, randomized techniques have appeared in several environments.

One of the original examples is the use of random models for arithmetical errors, which was pioneered by von Neumann and Goldstine. Their papers [135, 136] stand among the first works to study the properties of random matrices. The earliest numerical linear algebra algorithm that depends essentially on randomized techniques is probably Dixon’s method for estimating norms and condition numbers [39].

Another situation where randomness commonly arises is the initialization of iterative methods for computing invariant subspaces. For example, most numerical linear algebra texts advocate random selection of the starting vector for the power method because it ensures that the vector has a nonzero component in the direction of a dominant eigenvector. Woźniakowski and coauthors have analyzed the performance of the power method and the Lanczos iteration given a random starting vector [79, 85].

Among other interesting applications of randomness, we mention the work by Parker and Pierce, which applies a randomized FFT to eliminate pivoting in Gaussian elimination [106], work by Demmel et al. who have studied randomization in connection with the stability of fast methods for linear algebra [35], and work by Le and Parker utilizing randomized methods for stabilizing fast linear algebraic computations based on recursive algorithms, such as Strassen’s matrix multiplication [81].

#### 2.2.4 Scientific computing

One of the first algorithmic applications of randomness is the method of Monte Carlo integration introduced by von Neumann and Ulam [95], and its extensions, such as the Metropolis algorithm for simulations in statistical physics. (See [9] for an introduction.) The most basic technique is to estimate an integral by sampling m points from the measure μ and computing an empirical mean of the integrand evaluated at the sample locations:

 ∫ f(x) dμ(x) ≈ (1/m) ∑ᵢ f(Xi),

where X1, …, Xm are independent and identically distributed according to the probability measure μ. The law of large numbers (usually) ensures that this approach produces the correct result in the limit as m → ∞. Unfortunately, the approximation error typically has a standard deviation of m^(−1/2), and the method provides no certificate of success.
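The m^(−1/2) behavior is easy to observe in practice. The following small sketch (our illustration, not from the source) estimates ∫₀¹ x² dx = 1/3 by averaging the integrand at uniform samples:

```python
import numpy as np

def mc_integrate(f, m, rng):
    """Monte Carlo estimate of the integral of f over [0, 1]
    using m independent uniform samples."""
    x = rng.random(m)
    return f(x).mean()

rng = np.random.default_rng(0)
# The standard deviation of the estimate shrinks like m**-0.5,
# so quadrupling the sample size only halves the expected error.
est = mc_integrate(lambda x: x**2, 100_000, rng)
# est is close to the exact value 1/3, but no certificate is provided.
```

Running the estimator repeatedly with increasing m shows the slow square-root decay of the error that the text describes.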

The disappointing computational profile of Monte Carlo integration seems to have inspired a distaste for randomized approaches within the scientific computing community. Fortunately, there are many other types of randomized algorithms—such as the ones in this paper—that do not suffer from the same shortcomings.

#### 2.2.5 Geometric functional analysis

There is one more character that plays a central role in our story: the probabilistic method in geometric analysis. Many of the algorithms and proof techniques ultimately come from work in this beautiful but recondite corner of mathematics.

Dvoretzky’s theorem [53] states (roughly) that every infinite-dimensional Banach space contains an n-dimensional subspace whose geometry is essentially the same as an n-dimensional Hilbert space, where n is an arbitrary natural number. In 1971, V. D. Milman developed a striking proof of this result by showing that a random n-dimensional subspace of an N-dimensional Banach space has this property with exceedingly high probability, provided that N is large enough [96]. Milman’s article debuted the concentration of measure phenomenon, which is a geometric interpretation of the classical idea that regular functions of independent random variables rarely deviate far from their mean. This work opened a new era in geometric analysis where the probabilistic method became a basic instrument.

Another prominent example of measure concentration is Kashin’s computation of the Gel’fand widths of the ℓ1 ball [77], subsequently refined in [59]. This work showed that a random (n − k)-dimensional projection of the n-dimensional ℓ1 ball has an astonishingly small Euclidean diameter: on the order of √(log(n/k)/k). In contrast, a nonzero projection of the ℓ2 ball always has Euclidean diameter one. This basic geometric fact undergirds recent developments in compressive sampling [21].

We have already described a third class of examples: the randomized embeddings of Johnson–Lindenstrauss [74] and of Bourgain [15].

Finally, we mention Maurey’s technique of empirical approximation. The original work was unpublished; one of the earliest applications appears in [24, §1]. Although Maurey’s idea has not received as much press as the examples above, it can lead to simple and efficient algorithms for sparse approximation. For some examples in machine learning, consider [8, 84, 119, 110].

The importance of random constructions in the geometric analysis community has led to the development of powerful techniques for studying random matrices. Classical random matrix theory focuses on a detailed asymptotic analysis of the spectral properties of special classes of random matrices. In contrast, geometric analysts know methods for determining the approximate behavior of rather complicated finite-dimensional random matrices. See [34] for a fairly current survey article. We also mention the works of Rudelson [115] and Rudelson–Vershynin [116], which describe powerful tools for studying random matrices drawn from certain discrete distributions. Their papers are rooted deeply in the field of geometric functional analysis, but they reach out toward computational applications.

## 3 Linear algebraic preliminaries

This section summarizes the background we need for the detailed description of randomized algorithms in §§LABEL:sec:algorithm–LABEL:sec:costs and the analysis in §§LABEL:sec:lin-alg-prelim–LABEL:sec:SRFTs. We introduce notation in §LABEL:sec:basic_def, describe some standard matrix decompositions in §LABEL:sec:standard_factorizations, and briefly review standard techniques for computing matrix factorizations in §LABEL:sec:standard_techniques.

### 3.1 Basic definitions

The standard Hermitian geometry for ℂⁿ is induced by the inner product

 ⟨x, y⟩ = x · y = ∑j xj ȳj.

The associated norm is

 ∥x∥² = ⟨x, x⟩ = ∑j |xj|².

We usually measure the magnitude of a matrix with the operator norm

 ∥A∥ = max { ∥Ax∥ / ∥x∥ : x ≠ 0 },

which is often referred to as the spectral norm. The Frobenius norm is given by

 ∥A∥F = ( ∑jk |ajk|² )^(1/2).

The conjugate transpose, or adjoint, of a matrix A is denoted A∗. The important identities

 ∥A∥² = ∥A∗A∥ = ∥AA∗∥

hold for each matrix A.

We say that a matrix U is orthonormal if its columns form an orthonormal set with respect to the Hermitian inner product. An orthonormal matrix U preserves geometry in the sense that ∥Ux∥ = ∥x∥ for every vector x. A unitary matrix is a square orthonormal matrix, and an orthogonal matrix is a real unitary matrix. Unitary matrices satisfy the relations U∗U = UU∗ = I. Both the operator norm and the Frobenius norm are unitarily invariant, which means that

 ∥UAV∗∥ = ∥A∥ and ∥UAV∗∥F = ∥A∥F

for every matrix A and all orthonormal matrices U and V.

We use the notation of [61] to denote submatrices. If A is a matrix with entries aij, and if I = [i1, i2, …, ip] and J = [j1, j2, …, jq] are two index vectors, then the associated p × q submatrix is expressed as

 A(I, J) = ⎡ a(i1, j1) ⋯ a(i1, jq) ⎤
           ⎢     ⋮           ⋮     ⎥
           ⎣ a(ip, j1) ⋯ a(ip, jq) ⎦.

For column- and row-submatrices, we use the standard abbreviations

 A(:, J) = A([1, 2, …, m], J), and A(I, :) = A(I, [1, 2, …, n]).

### 3.2 Standard matrix factorizations

This section defines three basic matrix decompositions. Methods for computing them are described in §LABEL:sec:standard_techniques.

#### 3.2.1 The pivoted QR factorization

Each m × n matrix A of rank k admits a decomposition

 A = QR,

where Q is an m × k orthonormal matrix, and R is a k × n weakly upper-triangular matrix. That is, there exists a permutation J of the numbers {1, 2, …, n} such that R(:, J) is upper triangular. Moreover, the diagonal entries of R(:, J) are weakly decreasing. See [61, §5.4.1] for details.

#### 3.2.2 The singular value decomposition (SVD)

Each m × n matrix A of rank k admits a factorization

 A = UΣV∗,

where U is an m × k orthonormal matrix, V is an n × k orthonormal matrix, and Σ is a nonnegative, diagonal k × k matrix

 Σ = diag(σ1, σ2, …, σk).

The numbers σj are called the singular values of A. They are arranged in weakly decreasing order:

 σ1 ≥ σ2 ≥ ⋯ ≥ σk ≥ 0.

The columns of and are called left singular vectors and right singular vectors, respectively.

Singular values are connected with the approximability of matrices. For each j, the number σj+1 equals the spectral-norm discrepancy between A and an optimal rank-j approximation [97]. That is,

 σj+1 = min { ∥A − B∥ : B has rank j }. (3.1)

In particular, σ1 = ∥A∥. See [61, §2.5.3 and §5.4.5] for additional details.
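The optimality property (3.1) is straightforward to check numerically: truncating the SVD after j terms gives a rank-j approximation whose spectral-norm error equals the (j+1)st singular value. The following small NumPy check is our illustration, not part of the source.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 30))

# Compute the (reduced) SVD: A = U diag(s) Vt.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

j = 5
# Truncated SVD: keep the j dominant singular triples.
A_j = (U[:, :j] * s[:j]) @ Vt[:j, :]

# The spectral-norm error of the best rank-j approximation
# equals sigma_{j+1}, in agreement with (3.1).
err = np.linalg.norm(A - A_j, 2)
```

Up to floating-point rounding, `err` coincides with `s[j]`, the (j+1)st singular value.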

#### 3.2.3 The interpolative decomposition (ID)

Our final factorization identifies a collection of k columns from a rank-k matrix A that span the range of A. To be precise, we can compute an index set J = [j1, …, jk] such that

 A = A(:, J) X,

where X is a k × n matrix that satisfies X(:, J) = Ik. Furthermore, no entry of X has magnitude larger than two. In other words, this decomposition expresses each column of A using a linear combination of k fixed columns with bounded coefficients. Stable and efficient algorithms for computing the ID appear in the papers [26, 68].

It is also possible to compute a two-sided ID

 A = W A(J′, J) X,

where J′ is an index set identifying k of the rows of A, and W is an m × k matrix that satisfies W(J′, :) = Ik and whose entries are all bounded by two.

###### Remark 3.1

There always exists an ID where the entries in the factor X have magnitude bounded by one. Known proofs of this fact are constructive, e.g., [103, Lem. 3.3], but they require us to find a collection of k columns that has “maximum volume.” It is NP-hard to identify a subset of columns with this type of extremal property [27]. We find it remarkable that ID computations are possible as soon as the bound on X is relaxed.

### 3.3 Techniques for computing standard factorizations

This section discusses some established deterministic techniques for computing the factorizations presented in §LABEL:sec:standard_factorizations. The material on pivoted QR and SVD can be located in any major text on numerical linear algebra, such as [61, 132]. References for the ID include [68, 26].

#### 3.3.1 Computing the full decomposition

It is possible to compute the full QR factorization or the full SVD of an m × n matrix to double-precision accuracy with O(mn min{m, n}) flops. Techniques for computing the SVD are iterative by necessity, but they converge so fast that we can treat them as finite for practical purposes.

#### 3.3.2 Computing partial decompositions

Suppose that an m × n matrix has numerical rank k, where k is substantially smaller than m and n. In this case, it is possible to produce a structured low-rank decomposition that approximates the matrix well. Sections LABEL:sec:algorithm and LABEL:sec:otherfactorizations describe a set of randomized techniques for obtaining these partial decompositions. This section briefly reviews the classical techniques, which also play a role in developing randomized methods.

To compute a partial QR decomposition, the classical device is the Businger–Golub algorithm, which performs successive orthogonalization with pivoting on the columns of the matrix. The procedure halts when the Frobenius norm of the remaining columns is less than a computational tolerance ε. Letting ℓ denote the number of steps required, the process results in a partial factorization

 A = QR + E, (3.2)

where Q is an m × ℓ orthonormal matrix, R is an ℓ × n weakly upper-triangular matrix, and E is a residual that satisfies ∥E∥F ≤ ε. The computational cost is O(ℓmn), and the number ℓ of steps taken is typically close to the minimal rank k for which precision ε (in the Frobenius norm) is achievable. The Businger–Golub algorithm can in principle significantly overpredict the rank, but in practice this problem is very rare provided that orthonormality is maintained scrupulously.

Subsequent research has led to strong rank-revealing QR algorithms that succeed for all matrices. For example, the Gu–Eisenstat algorithm [68] (setting their parameter f = 2) produces a rank-k QR decomposition of the form (3.2), where

 ∥E∥ ≤ √(1 + 4k(n − k)) · σk+1.

Recall that σk+1 is the minimal error possible in a rank-k approximation [97]. The cost of the Gu–Eisenstat algorithm is typically O(kmn), but it can be slightly higher in rare cases. The algorithm can also be used to obtain an approximate ID [26].

To compute an approximate SVD of a general m × n matrix, the most straightforward technique is to compute the full SVD and truncate it. This procedure is stable and accurate, but it requires O(mn min{m, n}) flops. A more efficient approach is to compute a partial QR factorization and postprocess the factors to obtain a partial SVD using the methods described below in §LABEL:sec:converting. This scheme takes only O(kmn) flops. Krylov subspace methods can also compute partial SVDs at a comparable cost of O(kmn), but they are less robust.

Note that all the techniques described in this section require extensive random access to the matrix, and they can be very slow when the matrix is stored out-of-core.

#### 3.3.3 Converting from one partial factorization to another

Suppose that we have obtained a partial decomposition of a matrix A by some means:

 ∥A − CB∥ ≤ ε,

where C and B have rank k. Given this information, we can efficiently compute any of the basic factorizations.

We construct a partial QR factorization using the following three steps:

• Compute a QR factorization of C so that C = Q1R1.

• Form the product D = R1B, and compute a QR factorization: D = Q2R.

• Form the product Q = Q1Q2.

The result is an orthonormal matrix Q and a weakly upper-triangular matrix R such that ∥A − QR∥ ≤ ε.

An analogous technique yields a partial SVD:

• Compute a QR factorization of C so that C = Q1R1.

• Form the product D = R1B, and compute an SVD: D = U2ΣV∗.

• Form the product U = Q1U2.

The result is a diagonal matrix Σ and orthonormal matrices U and V such that ∥A − UΣV∗∥ ≤ ε.
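The SVD conversion above can be sketched in a few lines of NumPy (an illustrative sketch; the helper name is ours). Because the small matrix D = R1B has only k rows, the SVD in the second step is cheap, and the conversion introduces no error beyond the initial approximation ∥A − CB∥ ≤ ε.

```python
import numpy as np

def partial_svd_from_CB(C, B):
    """Convert an approximation A ~ C @ B (C: m x k, B: k x n)
    into a partial SVD, following the three steps in the text."""
    Q1, R1 = np.linalg.qr(C)                               # C = Q1 R1
    D = R1 @ B                                             # small k x n matrix
    U2, sigma, Vt = np.linalg.svd(D, full_matrices=False)  # D = U2 diag(sigma) V*
    U = Q1 @ U2
    return U, sigma, Vt

# Example: exact rank-k factors are reproduced to machine precision.
rng = np.random.default_rng(0)
C = rng.standard_normal((40, 5))
B = rng.standard_normal((5, 25))
U, sigma, Vt = partial_svd_from_CB(C, B)
# U diag(sigma) V* equals C B, so no extra error is introduced.
```

The dominant cost is forming D and Q1U2, which is O(k(m + n)k) plus the O(kmn)-type cost of the initial factorization, consistent with the flop counts quoted earlier in this section.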

Converting B and C into a partial ID is a one-step process:

• Compute J and X such that B = B(:, J)X.

Then A ≈ A(:, J)X, but the approximation error may deteriorate from the initial estimate. For example, if we compute the ID using the Gu–Eisenstat algorithm [68] with the parameter f = 2, then the error satisfies ∥A − A(:, J)X∥ ≤ (1 + √(1 + 4k(n − k))) · ε. Compare this bound with Lemma LABEL:lem:convert_ID below.

#### 3.3.4 Krylov-subspace methods

Suppose that the matrix A can be applied rapidly to vectors, as happens when A is sparse or structured. Then Krylov subspace techniques can very effectively and accurately compute partial spectral decompositions. For concreteness, assume that A is Hermitian. The idea of these techniques is to fix a starting vector ω and to seek approximations to the eigenvectors within the corresponding Krylov subspace

 Vq(ω) = span{ω, Aω, A²ω, …, A^(q−1)ω}.

Krylov methods also come in blocked versions, in which the starting vector ω is replaced by a starting matrix Ω. A common recommendation is to draw a starting vector ω (or starting matrix Ω) from a standardized Gaussian distribution, which indicates a significant overlap between Krylov methods and the methods in this paper.

The most basic versions of Krylov methods for computing spectral decompositions are numerically unstable. High-quality implementations require that we incorporate restarting strategies, techniques for maintaining high-quality bases for the Krylov subspaces, etc. The diversity and complexity of such methods make it hard to state a precise computational cost, but in the environment we consider in this paper, a typical cost for a fully stable implementation would be

 TKrylov ∼ k Tmult + k²(m + n), (3.3)

where Tmult is the cost of a matrix–vector multiplication.

Part II: Algorithms

This part of the paper, §§LABEL:sec:algorithm–LABEL:sec:numerics, provides detailed descriptions of randomized algorithms for constructing low-rank approximations to matrices. As discussed in §LABEL:sec:framework, we split the problem into two stages. In Stage A, we construct a subspace that captures the action of the input matrix. In Stage B, we use this subspace to obtain an approximate factorization of the matrix.

Section LABEL:sec:algorithm develops randomized methods for completing Stage A, and §LABEL:sec:otherfactorizations describes deterministic methods for Stage B. Section LABEL:sec:costs compares the computational costs of the resulting two-stage algorithm with the classical approaches outlined in §LABEL:sec:LA_prel. Finally, §LABEL:sec:numerics illustrates the performance of the randomized schemes via numerical examples.

## 4 Stage A: Randomized schemes for approximating the range

This section outlines techniques for constructing a subspace that captures most of the action of a matrix. We begin with a recapitulation of the proto-algorithm that we introduced in §LABEL:sec:sketchofalgorithm. We discuss how it can be implemented in practice (§LABEL:sec:proto_revisited) and then consider the question of how many random samples to acquire (§LABEL:sec:howmuchoversampling). Afterward, we present several ways in which the basic scheme can be improved. Sections LABEL:sec:aposteriori and LABEL:sec:algorithm1 explain how to address the situation where the numerical rank of the input matrix is not known in advance. Section LABEL:sec:powerscheme shows how to modify the scheme to improve its accuracy when the singular spectrum of the input matrix decays slowly. Finally, §LABEL:sec:ailonchazelle describes how the scheme can be accelerated by using a structured random matrix.

### 4.1 The proto-algorithm revisited

The most natural way to implement the proto-algorithm from §LABEL:sec:sketchofalgorithm is to draw a random test matrix Ω from the standard Gaussian distribution. That is, each entry of Ω is an independent Gaussian random variable with mean zero and variance one. For reference, we formulate the resulting scheme as Algorithm LABEL:alg:basic.

The number of flops required by Algorithm LABEL:alg:basic satisfies

 T_basic ∼ ℓn·T_rand + ℓ·T_mult + ℓ²m,  (4.1)

where T_rand is the cost of generating a Gaussian random number and T_mult is the cost of multiplying A by a vector. The three terms in (LABEL:eq:cost_basic) correspond directly with the three steps of Algorithm LABEL:alg:basic.

Empirically, we have found that the performance of Algorithm LABEL:alg:basic depends very little on the quality of the random number generator used in Step 1.

The actual cost of Step 2 depends substantially on the matrix A and the computational environment that we are working in. The estimate (LABEL:eq:cost_basic) suggests that Algorithm LABEL:alg:basic is especially efficient when the matrix–vector product x ↦ Ax can be evaluated rapidly. In particular, the scheme is appropriate for approximating sparse or structured matrices. Turn to §LABEL:sec:costs for more details.

The most important implementation issue arises when performing the basis calculation in Step 3. Typically, the columns of the sample matrix Y are almost linearly dependent, so it is imperative to use stable methods for performing the orthonormalization. We have found that the Gram–Schmidt procedure, augmented with the double orthogonalization described in [12], is both convenient and reliable. Methods based on Householder reflectors or Givens rotations also work very well. Note that very little is gained by pivoting because the columns of the sample matrix are independent samples drawn from the same distribution.
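A minimal numpy sketch of the resulting scheme (Gaussian test matrix, then Householder QR via `numpy.linalg.qr`); the function name and interface are ours, and the refinements discussed below are omitted.

```python
import numpy as np

def randomized_range_finder(A, ell, rng=np.random.default_rng()):
    """Sketch of the proto-algorithm with a Gaussian test matrix.

    Returns an orthonormal m x ell matrix Q whose range approximates the
    range of A."""
    n = A.shape[1]
    Omega = rng.standard_normal((n, ell))  # Step 1: draw the Gaussian test matrix
    Y = A @ Omega                          # Step 2: form the sample matrix Y = A Omega
    Q, _ = np.linalg.qr(Y)                 # Step 3: orthonormalize (Householder QR)
    return Q
```

With modest oversampling (ℓ = k + 5 or ℓ = k + 10 for a target rank k), the residual ∥A − QQ∗A∥ is typically close to optimal.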

### 4.2 The number of samples required

The goal of Algorithm LABEL:alg:basic is to produce an orthonormal matrix Q with few columns that achieves

 ∥(I − QQ∗)A∥ ≤ ε,  (4.2)

where ε is a specified tolerance. The number ℓ of columns that the algorithm needs to reach this threshold is usually slightly larger than the rank k of the smallest basis that verifies (LABEL:eq:numsamperr). We refer to this discrepancy p = ℓ − k as the oversampling parameter. The size of the oversampling parameter depends on several factors:

The matrix dimensions.

Very large matrices may require more oversampling.

The singular spectrum.

The more rapid the decay of the singular values, the less oversampling is needed. In the extreme case that the matrix has exact rank k, it is not necessary to oversample.

The random test matrix.

Gaussian matrices succeed with very little oversampling, but are not always the most cost-effective option. The structured random matrices discussed in §LABEL:sec:ailonchazelle may require substantial oversampling, but they still yield computational gains in certain settings.

The theoretical results in Part III provide detailed information about how the behavior of randomized schemes depends on these factors. For the moment, we limit ourselves to some general remarks on implementation issues.

For Gaussian test matrices, it is adequate to choose the oversampling parameter to be a small constant, such as p = 5 or p = 10. There is rarely any advantage to select p > k. This observation, first presented in [91], demonstrates that a Gaussian test matrix results in a negligible amount of extra computation.

In practice, the target rank is rarely known in advance. Randomized algorithms are usually implemented in an adaptive fashion where the number of samples is increased until the error satisfies the desired tolerance. In other words, the user never chooses the oversampling parameter. Theoretical results that bound the amount of oversampling are valuable primarily as aids for designing algorithms. We develop an adaptive approach in §§LABEL:sec:aposterioriLABEL:sec:algorithm1.

The computational bottleneck in Algorithm LABEL:alg:basic is usually the formation of the product AΩ. As a result, it often pays to draw a larger number of samples than necessary because the user can minimize the cost of the matrix multiplication with tools such as blocking of operations, high-level linear algebra subroutines, parallel processors, etc. This approach may lead to an ill-conditioned sample matrix Y, but the orthogonalization in Step 3 of Algorithm LABEL:alg:basic can easily identify the numerical rank of the sample matrix and ignore the excess samples. Furthermore, Stage B of the matrix approximation process succeeds even when the basis matrix Q has a larger dimension than necessary.

### 4.3 A posteriori error estimation

Algorithm LABEL:alg:basic is designed for solving the fixed-rank problem, where the target rank k of the input matrix is specified in advance. To handle the fixed-precision problem, where the parameter is the computational tolerance ε, we need a scheme for estimating how well a putative basis matrix Q captures the action of the matrix A. To do so, we develop a probabilistic error estimator. These methods are inspired by work of Dixon [39]; our treatment follows [88, 137].

The exact approximation error is ∥(I − QQ∗)A∥. It is intuitively plausible that we can obtain some information about this quantity by computing ∥(I − QQ∗)Aω∥, where ω is a standard Gaussian vector. This notion leads to the following method. Draw a sequence {ω^(i) : i = 1, 2, …, r} of standard Gaussian vectors, where r is a small integer that balances computational cost and reliability. Then

 ∥(I − QQ∗)A∥ ≤ 10·√(2/π)·max_{i=1,…,r} ∥(I − QQ∗)Aω^(i)∥  (4.3)

with probability at least 1 − 10^(−r). This statement follows by setting B = (I − QQ∗)A and α = 10 in the following lemma, whose proof appears in [137, §3.4].

###### Lemma 4.1

Let B be a real m×n matrix. Fix a positive integer r and a real number α > 1. Draw an independent family {ω^(i) : i = 1, 2, …, r} of standard Gaussian vectors. Then

 ∥B∥ ≤ α·√(2/π)·max_{i=1,…,r} ∥Bω^(i)∥

except with probability α^(−r).

The critical point is that the error estimate (LABEL:eq:errorest) is computationally inexpensive because it requires only a small number of matrix–vector products. Therefore, we can make a lowball guess for the numerical rank of A and add more samples if the error estimate is too large. The asymptotic cost of Algorithm LABEL:alg:basic is preserved if we double our guess for the rank at each step. For example, we can start with 32 samples, compute another 32, then another 64, etc.
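The estimator (LABEL:eq:errorest) is simple enough to state in a few lines of numpy. This hedged sketch (the function name is ours) draws r Gaussian probes and returns the right-hand side of the bound.

```python
import numpy as np

def estimate_residual_norm(A, Q, r=10, rng=np.random.default_rng()):
    """Returns the right-hand side of the probabilistic bound (4.3): with
    probability at least 1 - 10**(-r), the true error ||(I - QQ*)A|| does
    not exceed the returned value."""
    n = A.shape[1]
    Omega = rng.standard_normal((n, r))   # r standard Gaussian probe vectors
    Y = A @ Omega
    R = Y - Q @ (Q.conj().T @ Y)          # the vectors (I - QQ*) A omega^(i)
    return 10 * np.sqrt(2 / np.pi) * np.max(np.linalg.norm(R, axis=0))
```

Only r extra matrix–vector products with A are needed, so the estimate costs far less than recomputing the residual matrix.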

###### Remark 4.1

The estimate (LABEL:eq:errorest) is actually somewhat crude. We can obtain a better estimate at a similar computational cost by initializing a power iteration with a random vector and repeating the process several times [88].

### 4.4 Error estimation (almost) for free

The error estimate described in §LABEL:sec:aposteriori can be combined with any method for constructing an approximate basis for the range of a matrix. In this section, we explain how the error estimator can be incorporated into Algorithm LABEL:alg:basic at almost no additional cost.

To be precise, let us suppose that A is an m×n matrix and ε is a computational tolerance. We seek an integer ℓ and an m×ℓ orthonormal matrix Q^(ℓ) such that

 ∥(I − Q^(ℓ)(Q^(ℓ))∗)A∥ ≤ ε.  (4.4)

The size ℓ of the basis will typically be slightly larger than the size of the smallest basis that achieves this error.

The basic observation behind the adaptive scheme is that we can generate the basis in Step 3 of Algorithm LABEL:alg:basic incrementally. Starting with an empty basis matrix Q^(0), the following scheme generates an orthonormal matrix whose range captures the action of A:

for i = 1, 2, 3, …
 Draw an n×1 Gaussian random vector ω^(i), and set y^(i) = Aω^(i).
 Compute q̃^(i) = (I − Q^(i−1)(Q^(i−1))∗) y^(i).
 Normalize q^(i) = q̃^(i)/∥q̃^(i)∥, and form Q^(i) = [Q^(i−1) q^(i)].
end for

How do we know when we have reached a basis Q^(ℓ) that verifies (LABEL:eqn:err_est_err_bd)? The answer becomes apparent once we observe that the vectors q̃^(i) are precisely the vectors that appear in the error bound (LABEL:eq:errorest). The resulting rule is that we break the loop once we observe r consecutive vectors q̃^(i) whose norms are smaller than ε/(10·√(2/π)).

A formal description of the resulting algorithm appears as Algorithm LABEL:alg:adaptive2. A potential complication of the method is that the vectors q̃^(i) become small as the basis starts to capture most of the action of A. In finite-precision arithmetic, their direction is extremely unreliable. To address this problem, we simply reproject the normalized vector onto the orthogonal complement of range(Q^(i−1)) in steps 7 and 8 of Algorithm LABEL:alg:adaptive2.

The CPU time requirements of Algorithms LABEL:alg:adaptive2 and LABEL:alg:basic are essentially identical. Although Algorithm LABEL:alg:adaptive2 computes the last few samples purely to obtain the error estimate, this apparent extra cost is offset by the fact that Algorithm LABEL:alg:basic always includes an oversampling factor. The failure probability stated for Algorithm LABEL:alg:adaptive2 is pessimistic because it is derived from a simple union bound argument. In practice, the error estimator is reliable in a range of circumstances when we take r = 10.
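The incremental scheme of this section can be sketched as follows. This is a simplified rendition (the function name and interface are ours), and it departs from Algorithm LABEL:alg:adaptive2 in one respect: residual vectors falling below the threshold serve only as error probes and are not appended to the basis.

```python
import numpy as np

def adaptive_range_finder(A, eps, r=10, rng=np.random.default_rng()):
    """Incremental range finder in the spirit of this section.

    Grows Q one column at a time and stops after observing r consecutive
    residual vectors with norm below eps / (10 sqrt(2/pi)), which certifies
    ||(I - QQ*)A|| <= eps via the bound (4.3)."""
    m, n = A.shape
    Q = np.zeros((m, 0))
    threshold = eps / (10 * np.sqrt(2 / np.pi))
    consecutive_small = 0
    while consecutive_small < r and Q.shape[1] < min(m, n):
        y = A @ rng.standard_normal(n)  # y = A omega, a fresh Gaussian sample
        y -= Q @ (Q.T @ y)              # project away the range captured so far
        if np.linalg.norm(y) < threshold:
            consecutive_small += 1      # small residual: keep as an error probe only
            continue
        consecutive_small = 0
        q = y / np.linalg.norm(y)
        q -= Q @ (Q.T @ q)              # reproject: the direction of a small y is noisy
        Q = np.column_stack([Q, q / np.linalg.norm(q)])
    return Q
```

The loop performs one matrix–vector product per iteration, so the total cost matches the non-adaptive scheme run at the final value of ℓ.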

###### Remark 4.2

The calculations in Algorithm LABEL:alg:adaptive2 can be organized so that each iteration processes a block of samples simultaneously. This revision can lead to dramatic improvements in speed because it allows us to exploit higher-level linear algebra subroutines (e.g., BLAS3) or parallel processors. Although blocking can lead to the generation of unnecessary samples, this outcome is generally harmless, as noted in §LABEL:sec:howmuchoversampling.

### 4.5 A modified scheme for matrices whose singular values decay slowly

The techniques described in §LABEL:sec:proto_revisited and §LABEL:sec:algorithm1 work well for matrices whose singular values exhibit some decay, but they may produce a poor basis when the input matrix has a flat singular spectrum or when the input matrix is very large. In this section, we describe techniques, originally proposed in [67, 112], for improving the accuracy of randomized algorithms in these situations. Related earlier work includes [114] and the literature on classical orthogonal iteration methods [61, p. 332].

The intuition behind these techniques is that the singular vectors associated with small singular values interfere with the calculation, so we reduce their weight relative to the dominant singular vectors by taking powers of the matrix to be analyzed. More precisely, we wish to apply the randomized sampling scheme to the matrix B = (AA∗)^q A, where q is a small integer. The matrix B has the same singular vectors as the input matrix A, but its singular values decay much more quickly:

 σ_j(B) = σ_j(A)^(2q+1),  j = 1, 2, 3, ….  (4.5)

We modify Algorithm LABEL:alg:basic by replacing the formula Y = AΩ in Step 2 by the formula Y = BΩ = (AA∗)^q AΩ, and we obtain Algorithm LABEL:alg:poweriteration.

Algorithm LABEL:alg:poweriteration requires (2q+1) times as many matrix–vector multiplies as Algorithm LABEL:alg:basic, but it is far more accurate in situations where the singular values of A decay slowly. A good heuristic is that when the original scheme produces a basis whose approximation error is within a factor C of the optimum, the power scheme produces an approximation error within C^(1/(2q+1)) of the optimum. In other words, the power iteration drives the approximation gap to one exponentially fast. See Theorem LABEL:thm:power-method and §LABEL:sec:avg-power-method for the details.

Algorithm LABEL:alg:poweriteration targets the fixed-rank problem. To address the fixed-precision problem, we can incorporate the error estimators described in §LABEL:sec:aposteriori to obtain an adaptive scheme analogous with Algorithm LABEL:alg:adaptive2. In situations where it is critical to achieve near-optimal approximation errors, one can increase the oversampling beyond our standard recommendation ℓ = k + 5 all the way to ℓ = 2k without changing the scaling of the asymptotic computational cost. A supporting analysis appears in Corollary LABEL:cor:power-method-spec-gauss.

###### Remark 4.3

Unfortunately, when Algorithm LABEL:alg:poweriteration is executed in floating point arithmetic, rounding errors will extinguish all information pertaining to singular modes associated with singular values that are small compared with ∥A∥. (Roughly, if machine precision is μ, then all information associated with singular values smaller than μ^(1/(2q+1))·∥A∥ is lost.) This problem can easily be remedied by orthonormalizing the columns of the sample matrix between each application of A and A∗. The resulting scheme, summarized as Algorithm LABEL:alg:subspaceiteration, is algebraically equivalent to Algorithm LABEL:alg:poweriteration when executed in exact arithmetic [124, 92]. We recommend Algorithm LABEL:alg:subspaceiteration because its computational costs are similar to Algorithm LABEL:alg:poweriteration, even though the former is substantially more accurate in floating-point arithmetic.
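A hedged numpy sketch of the subspace-iteration variant (the function name is ours): each application of A or A∗ is followed by an orthonormalization, which is precisely the remedy described in the remark above.

```python
import numpy as np

def randomized_subspace_iteration(A, ell, q, rng=np.random.default_rng()):
    """Power scheme with intermediate orthonormalization.

    Algebraically equivalent (in exact arithmetic) to sampling (A A*)^q A,
    but the QR step after each application of A or A* keeps small singular
    modes from being swamped by roundoff."""
    n = A.shape[1]
    Q, _ = np.linalg.qr(A @ rng.standard_normal((n, ell)))
    for _ in range(q):
        Q, _ = np.linalg.qr(A.conj().T @ Q)  # orthonormalize after applying A*
        Q, _ = np.linalg.qr(A @ Q)           # orthonormalize after applying A
    return Q
```

Each pass costs two multiplications of A (or A∗) against an n×ℓ block plus two small QR factorizations, so the total work is (2q+1) block multiplies, as stated above.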

### 4.6 An accelerated technique for general dense matrices

This section describes a set of techniques that allow us to compute an approximate rank-ℓ factorization of a general dense m×n matrix in roughly O(mn·log(ℓ)) flops, in contrast to the asymptotic cost O(mnℓ) required by earlier methods. We can tailor this scheme for the real or complex case, but we focus on the conceptually simpler complex case. These algorithms were introduced in [137]; similar techniques were proposed in [118].

The first step toward this accelerated technique is to observe that the bottleneck in Algorithm LABEL:alg:basic is the computation of the matrix product AΩ. When the test matrix Ω is standard Gaussian, the cost of this multiplication is O(mnℓ), the same as a rank-revealing QR algorithm [68]. The key idea is to use a structured random matrix that allows us to compute the product AΩ in O(mn·log(ℓ)) flops.

The subsampled random Fourier transform, or SRFT, is perhaps the simplest example of a structured random matrix that meets our goals. An SRFT is an n×ℓ matrix of the form

 Ω = √(n/ℓ)·DFR,  (4.6)

where

• D is an n×n diagonal matrix whose entries are independent random variables uniformly distributed on the complex unit circle,

• F is the n×n unitary discrete Fourier transform (DFT), whose entries take the values f_pq = n^(−1/2)·e^(−2πi(p−1)(q−1)/n) for p, q = 1, 2, …, n, and

• R is an n×ℓ matrix that samples ℓ coordinates from n uniformly at random, i.e., its ℓ columns are drawn randomly without replacement from the columns of the n×n identity matrix.

When Ω is defined by (LABEL:eq:def_srft), we can compute the sample matrix Y = AΩ using O(mn·log(ℓ)) flops via a subsampled FFT [137]. Then we form the basis Q by orthonormalizing the columns of Y, as described in §LABEL:sec:proto_revisited. This scheme appears as Algorithm LABEL:alg:fastbasic. The total number of flops required by this procedure is

 T_struct ∼ mn·log(ℓ) + ℓ²n.  (4.7)

Note that if is substantially larger than the numerical rank of the input matrix, we can perform the orthogonalization with flops because the columns of the sample matrix are almost linearly dependent.
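To make the construction concrete, here is a hedged numpy sketch (the function name is ours) that applies an SRFT to A. For simplicity it uses the full FFT, costing O(mn·log n) rather than the O(mn·log ℓ) of a subsampled FFT; the FFT sign and normalization conventions of `numpy.fft` stand in for the DFT defined above.

```python
import numpy as np

def srft_sample(A, ell, rng=np.random.default_rng()):
    """Form Y = A Omega for an SRFT test matrix Omega = sqrt(n/ell) D F R.

    Uses the full FFT for simplicity; a subsampled FFT would reduce the
    cost from O(mn log n) to O(mn log ell)."""
    n = A.shape[1]
    d = np.exp(2j * np.pi * rng.random(n))         # D: random points on the unit circle
    cols = rng.choice(n, size=ell, replace=False)  # R: sample ell coordinates
    ADF = np.fft.fft(A * d, axis=1) / np.sqrt(n)   # A D F with unitary DFT scaling
    return np.sqrt(n / ell) * ADF[:, cols]         # Y = sqrt(n/ell) A D F R
```

Orthonormalizing the columns of the returned Y yields the basis Q, exactly as in the Gaussian case.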

The test matrix (LABEL:eq:def_srft) is just one choice among many possibilities. Other suggestions that appear in the literature include subsampled Hadamard transforms, chains of Givens rotations acting on randomly chosen coordinates, and many more. See [86] and its bibliography. Empirically, we have found that the transform summarized in Remark LABEL:remark:random_givens below performs very well in a variety of environments [113].

At this point, it is not well understood how to quantify and compare the behavior of structured random transforms. One reason for this uncertainty is that it has been difficult to analyze the amount of oversampling that various transforms require. Section LABEL:sec:SRFTs establishes that the random matrix (LABEL:eq:def_srft) can be used to identify a near-optimal basis for a rank-k matrix using ℓ ∼ (k + log(n))·log(k) samples. In practice, the transforms (LABEL:eq:def_srft) and (LABEL:eq:random_Givens) typically require no more oversampling than a Gaussian test matrix requires. (For a numerical example, see §LABEL:sec:num_SRFT.) As a consequence, setting ℓ = k + 10 or ℓ = k + 20 is typically more than adequate. Further research on these questions would be valuable.

###### Remark 4.4

The structured random matrices discussed in this section do not adapt readily to the fixed-precision problem, where the computational tolerance is specified, because the samples from the range are usually computed in bulk. Fortunately, these schemes are sufficiently inexpensive that we can progressively increase the number of samples computed, starting with ℓ = 32, say, and then proceeding to ℓ = 64, 128, 256, … until we achieve the desired tolerance.

###### Remark 4.5

When using the SRFT (LABEL:eq:def_srft) for matrix approximation, we have a choice whether to use a subsampled FFT or a full FFT. The complete FFT is so inexpensive that it often pays to construct the extended sample matrix ADF and then generate the actual samples by drawing ℓ columns at random from it and rescaling as needed. The asymptotic cost increases to O(mn·log(n)) flops, but the full FFT is actually faster for moderate problem sizes because the constant suppressed by the big-O notation is so small. Adaptive rank determination is easy because we just examine extra samples as needed.

###### Remark 4.6

Among the structured random matrices that we have tried, one of the strongest candidates involves sequences of random Givens rotations [113]. This matrix takes the form

 Ω = D″Θ′D′ΘDFR,  (4.8)

where the prime symbol ′ indicates an independent realization of a random matrix. The matrices D, F, and R are defined after (LABEL:eq:def_srft). The matrix Θ is a chain of random Givens rotations:

 Θ = Π·G(1, 2; θ₁)·G(2, 3; θ₂) ⋯ G(n−1, n; θ_{n−1}),

where Π is a random n×n permutation matrix; where θ₁, …, θ_{n−1} are independent random variables uniformly distributed on the interval [0, 2π]; and where G(i, j; θ) denotes a rotation by the angle θ in the (i, j) coordinate plane [61, §5.1.8].

###### Remark 4.7

When the singular values of the input matrix decay slowly, Algorithm LABEL:alg:fastbasic may perform poorly in terms of accuracy. When randomized sampling is used with a Gaussian random matrix, the recourse is to take a couple of steps of a power iteration; see Algorithm LABEL:alg:subspaceiteration. However, it is not currently known whether such an iterative scheme can be accelerated to O(mn·log(k)) complexity using “fast” random transforms such as the SRFT.

## 5 Stage B: Construction of standard factorizations

The algorithms for Stage A described in §LABEL:sec:algorithm produce an orthonormal matrix Q whose range captures the action of an input matrix A:

 ∥A − QQ∗A∥ ≤ ε,  (5.1)

where ε is a computational tolerance. This section describes methods for approximating standard factorizations of A using the information in the basis Q.

To accomplish this task, we pursue the idea from §LABEL:sec:converting that any low-rank factorization can be manipulated to produce a standard decomposition. When the bound (LABEL:eq:fixed_precision2) holds, the low-rank factors are simply Q and Q∗A. The simplest scheme (§LABEL:sec:postSVD) computes the factor Q∗A directly with a matrix–matrix product to ensure a minimal error in the final approximation. An alternative approach (§LABEL:sec:postrows) constructs the factors without forming any matrix–matrix product. The approach of §LABEL:sec:postrows is often faster than the approach of §LABEL:sec:postSVD but typically results in larger errors. Both schemes can be streamlined for a Hermitian input matrix (§LABEL:sec:postsym) and a positive semidefinite input matrix (§LABEL:sec:postpsd). Finally, we develop single-pass algorithms that exploit other information generated in Stage A to avoid revisiting the input matrix (§LABEL:sec:onepass).

Throughout this section, A denotes an m×n matrix, and Q is an m×ℓ orthonormal matrix that verifies (LABEL:eq:fixed_precision2). For purposes of exposition, we concentrate on methods for constructing the partial SVD.

### 5.1 Factorizations based on forming Q∗A directly

The relation (LABEL:eq:fixed_precision2) implies that ∥A − QB∥ ≤ ε, where B = Q∗A. Once we have computed B, we can produce any standard factorization using the methods of §LABEL:sec:converting. Algorithm LABEL:alg:Atranspose illustrates how to build an approximate SVD.

The factors produced by Algorithm LABEL:alg:Atranspose satisfy

 ∥A − UΣV∗∥ ≤ ε.  (5.2)

In other words, the approximation error does not degrade.

The cost of Algorithm LABEL:alg:Atranspose is generally dominated by the cost of the product Q∗A in Step 1, which takes O(mnℓ) flops for a general dense matrix. Note that this scheme is particularly well suited to environments where we have a fast method for computing the matrix–vector product x ↦ A∗x, for example when A is sparse or structured. This approach retains a strong advantage over Krylov-subspace methods and rank-revealing QR because Step 1 can be accelerated using BLAS3, parallel processors, and so forth. Steps 2 and 3 require O(ℓ²n) and O(ℓ²m) flops respectively.
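A hedged numpy sketch of this postprocessing step (the function name is ours), following the three steps described above:

```python
import numpy as np

def svd_from_basis(A, Q):
    """Stage B via direct computation of B = Q* A.

    Returns U, s, Vh with ||A - U diag(s) Vh|| equal to the Stage A error
    ||A - QQ*A||, so the postprocessing adds no error."""
    B = Q.conj().T @ A                                   # Step 1: ell x n reduced matrix
    Uhat, s, Vh = np.linalg.svd(B, full_matrices=False)  # Step 2: SVD of the small matrix
    U = Q @ Uhat                                         # Step 3: lift back to m dimensions
    return U, s, Vh
```

Since B has only ℓ rows, the small SVD in Step 2 is cheap compared with the matrix–matrix product in Step 1.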

###### Remark 5.1

Algorithm LABEL:alg:Atranspose produces an approximate SVD with the same rank as the basis matrix Q. When the size of the basis exceeds the desired rank k of the SVD, it may be preferable to retain only the dominant k singular values and singular vectors. Equivalently, we replace the diagonal matrix Σ of computed singular values with the matrix formed by zeroing out all but the k largest entries of Σ. In the worst case, this truncation step can increase the approximation error by σ_{k+1}; see §LABEL:sec:truncation-analysis for an analysis. Our numerical experience suggests that this error analysis is pessimistic, and the additional term often does not appear in practice.

### 5.2 Postprocessing via row extraction

Given a matrix Q such that (LABEL:eq:fixed_precision2) holds, we can obtain a rank-ℓ factorization

 A ≈ XB,  (5.3)

where B is an ℓ×n matrix consisting of ℓ rows extracted from A. The approximation (LABEL:eq:A_ID) can be produced without computing any matrix–matrix products, which makes this approach to postprocessing very fast. The drawback comes because the error ∥A − XB∥ is usually larger than the initial error ∥(I − QQ∗)A∥, especially when the dimensions of A are large. See Remark LABEL:remark:errors for more discussion.

To obtain the factorization (LABEL:eq:A_ID), we simply construct the interpolative decomposition (§LABEL:sec:ID) of the matrix Q:

 Q = X·Q_(J,:).  (5.4)

The index set J marks ℓ rows of Q that span the row space of Q, and X is an m×ℓ matrix whose entries are bounded in magnitude by two and which contains the ℓ×ℓ identity as a submatrix: X_(J,:) = I_ℓ. Combining (LABEL:eq:ID) and (LABEL:eq:fixed_precision2), we reach

 A ≈ QQ∗A = X·Q_(J,:)·Q∗A.  (5.5)

Since , equation (LABEL:eq:saathoff) implies that