
# A Rigorous Extension of the Schönhage-Strassen Integer Multiplication Algorithm Using Complex Interval Arithmetic

Thomas Steinke
Department of Mathematics and Statistics, University of Canterbury, Christchurch, New Zealand
tas74@student.canterbury.ac.nz

Raazesh Sainudiin
Department of Mathematics and Statistics, University of Canterbury, Christchurch, New Zealand
r.sainudiin@math.canterbury.ac.nz
###### Abstract

Multiplication of $n$-digit integers by long multiplication requires $O(n^2)$ operations and can be time-consuming. In 1971 A. Schönhage and V. Strassen published an algorithm capable of performing the task with only $O(n \log n)$ arithmetic operations over $\mathbb{C}$; naturally, finite-precision approximations to $\mathbb{C}$ are used and rounding errors need to be accounted for. Overall, using variable-precision fixed-point numbers, this results in an algorithm whose bit-complexity exceeds $O(n \log n)$ only by polylogarithmic factors. However, to make this algorithm more efficient and practical we need to make use of hardware-based floating-point numbers. How do we deal with rounding errors? And how do we determine the limits of the fixed-precision hardware? Our solution is to use interval arithmetic to guarantee the correctness of results and to determine the hardware's limits. We examine the feasibility of this approach and are able to report that 75,000-digit base-256 integers can be handled using double-precision containment sets. This clearly demonstrates that our approach has practical potential; however, at this stage, our implementation does not yet compete with commercial ones, so it serves as a demonstration of the feasibility of this technique.

X. Zheng and N. Zhong (Eds.): Computability and Complexity in Analysis (CCA 2010), EPTCS 24, 2010, doi:10.4204/EPTCS.24.19. © T. Steinke & R. Sainudiin. This work is licensed under the Creative Commons Attribution License.


## 1 Introduction

Multiplication of very large integers is a crucial subroutine of many algorithms such as the RSA cryptosystem [8]. Consequently, much effort has gone into finding fast and reliable multiplication algorithms; [5] discusses several methods. The asymptotically-fastest known algorithm [2] requires $n \log n \, 2^{O(\log^* n)}$ steps, where $\log^* n$, the iterated logarithm, is defined as the number of times one has to repeatedly take the logarithm before the number is less than $1$. However, being asymptotically faster does not translate to being faster in practice. We shall concern ourselves with the practicalities of the subject; we will analyse our algorithm's performance on a finite range of numbers.

The algorithm we are studying here is based on the first of two asymptotically fast multiplication algorithms by A. Schönhage and V. Strassen [9]. These algorithms are based on the convolution theorem and the fast Fourier transform. The first algorithm (the one we are studying) performs the discrete Fourier transform over $\mathbb{C}$ using finite-precision approximations. The second algorithm uses the same ideas as the first, but it works over the finite ring $\mathbb{Z}/(2^k+1)\mathbb{Z}$ rather than the uncountable field $\mathbb{C}$. We wish to point out that “the Schönhage-Strassen algorithm” usually refers to the second algorithm. However, in this document we use it to refer to the first, $\mathbb{C}$-based, algorithm.

From the theoretical viewpoint, the second algorithm is much nicer than the first. The second algorithm does not require the use of finite-precision approximations to $\mathbb{C}$. Also, the second algorithm requires $O(n \log n \log\log n)$ steps to multiply two $n$-bit numbers, making it asymptotically faster than the first algorithm. However, the second algorithm is much more complicated than the first, and it is outperformed by asymptotically-slower algorithms, such as long multiplication, for small-to-medium input sizes. In practice, both of the Schönhage-Strassen algorithms are rarely used.

The first Schönhage-Strassen Algorithm is more elegant, if the finite-precision approximations are ignored. More importantly, it is faster in practice. Previous studies [3] have shown that the first algorithm can be faster than even highly-optimised implementations of the second. However, the first algorithm’s reliance on finite-precision approximations, despite exact answers being required, leads to it being discounted.

The saving grace of the Schönhage-Strassen algorithm is that at the end of the computation an integral result will be obtained. So the finite-precision approximations are rounded to integers. Thus, as long as rounding errors are sufficiently small for the rounding to be correct, an exact answer will be obtained. Schönhage and Strassen showed that fixed-point numbers with a variable precision of $O(\log n)$ bits would be sufficient to achieve this.

For the Schönhage-Strassen algorithm to be practical, we need to make use of hardware-based floating-point numbers; software-based variable-precision numbers are simply too slow. However, we need to be able to manage the rounding errors. At the very least, we must be able to detect when the error is too large and more precision is needed. The usual approach to this is to prove some kind of worst-case error bound (for an example, see [7]). Then we can be sure that, for sufficiently small inputs, the algorithm will give correct results. However, worst-case bounds are rarely tight. We propose the use of dynamic error bounds using existing techniques from computer-aided proofs.

Dynamic error detection allows us to move beyond worst-case bounds. For example, using standard single-precision floating-point numbers, our naïve implementation of the Schönhage-Strassen algorithm sometimes gave an incorrect result when we tried multiplying two 120-digit base-256 numbers, but it usually gave correct results. Note that by a ‘naïve implementation’ we simply mean a modification of the Schönhage-Strassen algorithm that uses fixed-precision floating-point arithmetic and does not guarantee correctness. A worst-case bound would not allow us to use the algorithm in this case, despite it usually being correct. Dynamic error detection, however, would allow us to try the algorithm, and, in the rare instances where errors occur, it would inform us that we need to use more precision.

We will use complex interval containment sets for all complex arithmetic operations. This means that at the end of the computation, where we would ordinarily round to the nearest integer, we simply choose the unique integer in the containment set. If the containment set contains multiple integers, then we report an error. This rigorous extension of the Schönhage-Strassen algorithm therefore constitutes a computer-aided proof of the desired product. When an error is detected, we must increase the precision being used or we must use a different algorithm.

For those unfamiliar with the Schönhage-Strassen algorithm or with interval arithmetic, we describe these in Section 2. Then, in Section 3, we show the empirical results of our study. Section 4, our conclusion, briefly discusses the implications of our results.

## 2 The Algorithm

For the sake of completeness we explain the Schönhage-Strassen algorithm, as it is presented in [9]. We also explain how we have modified the algorithm using interval arithmetic in subsection 2.5. Those already familiar with the material may skip all or part of this section.

We make the convention that a positive integer $x$ is represented in base $b$ (usually $b = 2^k$ for some $k \in \mathbb{N}$) as a vector $(x_0, x_1, \ldots, x_{n-1})$ with $0 \le x_i < b$ for every $i$; the value of $x$ is

$$x = \sum_{i=0}^{n-1} x_i b^i.$$

### 2.1 Basic Multiplication Algorithm

The above definition immediately leads to a formula for multiplication. Let $x$ and $y$ be positive integers with representations $(x_0, \ldots, x_{n-1})$ and $(y_0, \ldots, y_{m-1})$. Then

$$xy = \left(\sum_{i=0}^{n-1} x_i b^i\right)\left(\sum_{j=0}^{m-1} y_j b^j\right) = \sum_{i=0}^{n+m-2} \left(\sum_{j=\max\{0,\, i-m+1\}}^{\min\{n-1,\, i\}} x_j y_{i-j}\right) b^i = \sum_{i=0}^{n+m-1} z_i b^i.$$

Of course, we cannot simply set $z_i = \sum_j x_j y_{i-j}$; this may violate the constraint that $0 \le z_i < b$ for every $i$. We must ‘carry’ the ‘overflow’. This leads to the long multiplication algorithm (see [5]).
The Long Multiplication Algorithm
1. Input: x ∈ {0, …, b−1}^n and y ∈ {0, …, b−1}^m
2. Output: z ∈ {0, …, b−1}^{n+m} # z = xy
3. Set c = 0. # c = carry
4. For i = 0 up to n+m−1 do {
5. Set s = c. # s = sum
6. For j = max{0, i−m+1} up to min{n−1, i} do {
7. Set s = s + x_j y_{i−j}.
8. }.
9. Set z_i = s mod b.
10. Set c = ⌊s/b⌋.
11. }.
12. # c = 0 at the end.
This algorithm requires $O(nm)$ steps (for a fixed $b$). Close inspection of the long multiplication algorithm might suggest that more steps are required, as the sum $s$ can become very large. However, adding a bounded number ($x_j y_{i-j} < b^2$) to an unbounded number ($s$) is, on average, a constant-time operation.
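As a concrete illustration, the long multiplication algorithm above can be sketched in Python (a hypothetical helper for exposition, not the authors' C++/C-XSC implementation); digit vectors are least-significant first, and the base defaults to $b = 256$ as used elsewhere in the paper.

```python
def long_multiply(x, y, b=256):
    """Long multiplication of digit vectors (least-significant digit first)."""
    n, m = len(x), len(y)
    z = [0] * (n + m)
    c = 0  # carry
    for i in range(n + m):
        s = c  # running sum for digit i
        for j in range(max(0, i - m + 1), min(n - 1, i) + 1):
            s += x[j] * y[i - j]
        z[i] = s % b
        c = s // b
    assert c == 0  # the final carry vanishes, as noted in the listing
    return z

def to_int(digits, b=256):
    """Evaluate a digit vector: sum of digits[i] * b**i."""
    return sum(d * b**i for i, d in enumerate(digits))
```

For example, `long_multiply([255, 255], [2])` yields `[254, 255, 1]`, the base-256 digits of 65535 × 2 = 131070.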

### 2.2 The Discrete Fourier Transform

The basis of the Schönhage-Strassen algorithm is the discrete Fourier transform and the convolution theorem. The discrete Fourier transform is a map from $\mathbb{C}^n$ to $\mathbb{C}^n$. In this section we will define the discrete Fourier transform and we will show how it and its inverse can be calculated with $O(n \log n)$ complex additions and multiplications. See [5] for further details.

###### Definition 1 (Discrete Fourier Transform).

Let $n \in \mathbb{N}$ and let $\omega = e^{2\pi i/n}$. Then define the discrete Fourier transform $\hat{x} \in \mathbb{C}^n$ of $x \in \mathbb{C}^n$ by

$$\hat{x}_i := \sum_{j=0}^{n-1} x_j \omega^{ij} \qquad (0 \le i \le n-1).$$

There is nothing special about our choice of $\omega = e^{2\pi i/n}$; the consequences of the following lemma are all that we need $\omega$ to satisfy. Any other element of $\mathbb{C}$ with the same properties would suffice.

###### Lemma 2.

Let $n \ge 2$ and $\omega = e^{2\pi i/n}$. Then

$$\omega^n = 1 \quad \text{and} \quad \omega^k \ne 1 \text{ for all } 0 < k < n,$$

and, for all $0 < k < n$,

$$\sum_{i=0}^{n-1} \omega^{ik} = 0.$$

Note that the case where $n = 1$ is uninteresting, as $\omega = 1$ and the discrete Fourier transform is the identity mapping $\hat{x} = x$.

Now we can prove that the discrete Fourier transform is a bijection.

###### Proposition 3 (Inverse Discrete Fourier Transform).

Let $n \in \mathbb{N}$ and let $\omega = e^{2\pi i/n}$. Define $\check{x} \in \mathbb{C}^n$ for $x \in \mathbb{C}^n$ by

$$\check{x}_i := \frac{1}{n} \sum_{j=0}^{n-1} x_j \omega^{-ij} \qquad (0 \le i \le n-1).$$

Then this defines the inverse of the discrete Fourier transform; that is, if $y = \hat{x}$, then $\check{y} = x$.
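Definition 1 and Proposition 3 translate directly into a naive $O(n^2)$ Python sketch (illustrative helper names, not the authors' code):

```python
import cmath

def dft(x):
    """Discrete Fourier transform with omega = e^(2*pi*i/n) (Definition 1)."""
    n = len(x)
    w = cmath.exp(2j * cmath.pi / n)
    return [sum(x[j] * w ** (i * j) for j in range(n)) for i in range(n)]

def idft(xh):
    """Inverse transform of Proposition 3: negate the exponent, divide by n."""
    n = len(xh)
    w = cmath.exp(2j * cmath.pi / n)
    return [sum(xh[j] * w ** (-i * j) for j in range(n)) / n for i in range(n)]
```

Up to floating-point rounding, `idft(dft(x))` recovers `x`, which is exactly the bijectivity asserted above.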

Now we explain the fast Fourier transform; this is simply a fast algorithm for computing the discrete Fourier transform and its inverse.

Let $n$ be a power of $2$ and let $x \in \mathbb{C}^n$ be given. Now define $x_{\mathrm{even}}, x_{\mathrm{odd}} \in \mathbb{C}^{n/2}$ by

$$(x_{\mathrm{even}})_i = x_{2i}, \qquad (x_{\mathrm{odd}})_i = x_{2i+1},$$

for all $i$ with $0 \le i \le n/2 - 1$.

Now the critical observation of the Cooley-Tukey fast Fourier transform algorithm is the following. Fix $i$ with $0 \le i \le n-1$ and let $\omega = e^{2\pi i/n}$. Then we have

$$\hat{x}_i = \sum_{j=0}^{n-1} x_j \omega^{ij} = \sum_{j=0}^{n/2-1} x_{2j} (\omega^2)^{ij} + \omega^i \sum_{j=0}^{n/2-1} x_{2j+1} (\omega^2)^{ij} = (\widehat{x_{\mathrm{even}}})_{i \bmod n/2} + \omega^i (\widehat{x_{\mathrm{odd}}})_{i \bmod n/2}.$$

Note that $\omega^2 = e^{2\pi i/(n/2)}$, so taking the index modulo $n/2$ is justified. This observation leads to the following divide-and-conquer algorithm.
The Cooley-Tukey Fast Fourier Transform
1. Input: n (a power of 2) and x ∈ ℂ^n
2. Output: x̂
3. function FFT(n, x) {
4. If n = 1, then return x̂ = x.
5. Partition x into x_even, x_odd ∈ ℂ^{n/2}.
6. Compute x̂_even = FFT(n/2, x_even) by recursion.
7. Compute x̂_odd = FFT(n/2, x_odd) by recursion.
8. Compute ω = e^{2πi/n}.
9. For i = 0 up to n−1 do {
10. Set x̂_i = (x̂_even)_{i mod n/2} + ω^i (x̂_odd)_{i mod n/2}.
11. }.
12. }.
It is easy to show that this algorithm requires $O(n \log n)$ complex additions and multiplications. With very little modification we are also able to obtain a fast algorithm for computing the inverse discrete Fourier transform.
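The listing above transcribes almost line-for-line into Python (a sketch under the paper's convention $\omega = e^{2\pi i/n}$; the inverse is obtained by conjugation, one instance of the 'very little modification' just mentioned):

```python
import cmath

def fft(x):
    """Cooley-Tukey radix-2 FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return x[:]
    even = fft(x[0::2])  # x_even
    odd = fft(x[1::2])   # x_odd
    w = cmath.exp(2j * cmath.pi / n)
    return [even[i % (n // 2)] + w ** i * odd[i % (n // 2)] for i in range(n)]

def ifft(xh):
    """Inverse FFT via conjugation: conj(FFT(conj(x))) / n."""
    n = len(xh)
    y = fft([z.conjugate() for z in xh])
    return [z.conjugate() / n for z in y]
```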

Note that, to compute $\omega_n$, we can use the recurrence

$$\omega_1 = 1, \qquad \omega_2 = -1, \qquad \omega_4 = i, \qquad \omega_{2n} = \frac{1 + \omega_n}{|1 + \omega_n|} \quad (n \ge 3),$$

where $\omega_n = e^{2\pi i/n}$. Other efficient methods of computing $\omega_n$ are also available.

### 2.3 The Convolution Theorem

We start by defining the convolution. Let $a, b \in \mathbb{C}^n$. We can interpret $a$ and $b$ as the coefficients of two polynomials, that is,

$$f_a(z) = a_0 + a_1 z + \cdots + a_{n-1} z^{n-1}.$$

The convolution of $a$ and $b$, denoted by $a * b$, is, for our purposes, the vector of coefficients obtained by multiplying the polynomials $f_a$ and $f_b$. Thus we have $f_a f_b = f_{a*b}$, so $(a*b)_i = \sum_{j=\max\{0,\, i-n+1\}}^{\min\{n-1,\, i\}} a_j b_{i-j}$ for all $0 \le i \le 2n-2$. Note that we can add ‘padding zeroes’ to the end of the coefficient vectors without changing the corresponding polynomial.
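In Python, the definition reads as follows (a direct $O(n^2)$ sketch of polynomial multiplication; hypothetical helper name):

```python
def convolve_direct(a, b):
    """Coefficients of f_a * f_b: (a * b)_i = sum over j of a_j * b_{i-j}."""
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c
```

For example, $(1 + 2z)(3 + 4z) = 3 + 10z + 8z^2$, i.e. `convolve_direct([1, 2], [3, 4])` gives `[3, 10, 8]`.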

The convolution theorem relates convolutions to Fourier transforms. We only use a restricted form.

###### Theorem 4 (Convolution Theorem).

Let $a, b \in \mathbb{C}^n$ and let $c \in \mathbb{C}^m$ be $a * b$ padded with zeroes, where $m \ge 2n - 1$. Pad $a$ and $b$ by setting

$$a' = (a_0, a_1, \cdots, a_{n-1}, 0, \cdots, 0), \qquad b' = (b_0, b_1, \cdots, b_{n-1}, 0, \cdots, 0) \in \mathbb{C}^m.$$

Then, for every $i$ with $0 \le i \le m - 1$,

$$\hat{c}_i = \hat{a'}_i \hat{b'}_i.$$

The convolution theorem gives us a fast method of computing convolutions and, thus, of multiplying polynomials. Given $a, b \in \mathbb{C}^n$, we can calculate $a * b$ using only $O(n \log n)$ arithmetic operations as follows.
The Fast Convolution Algorithm
1. Input: a, b ∈ ℂ^n
2. Output: c = a ∗ b
3. Set m = 2^{⌈log₂(2n−1)⌉}.
4. Set a′ = (a_0, …, a_{n−1}, 0, …, 0) and b′ = (b_0, …, b_{n−1}, 0, …, 0). # Pad a and b so they are in ℂ^m.
5. Compute â′ = FFT(m, a′) and b̂′ = FFT(m, b′).
6. For i = 0 up to m−1, set ĉ_i = â′_i b̂′_i.
7. Compute c = FFT⁻¹(m, ĉ).
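The steps above can be sketched in Python as follows (self-contained, so the radix-2 FFT from earlier in this section is repeated; an illustration, not the authors' implementation):

```python
import cmath

def fft(x):
    """Radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return x[:]
    even, odd = fft(x[0::2]), fft(x[1::2])
    w = cmath.exp(2j * cmath.pi / n)
    return [even[i % (n // 2)] + w ** i * odd[i % (n // 2)] for i in range(n)]

def ifft(xh):
    """Inverse FFT via conjugation."""
    y = fft([z.conjugate() for z in xh])
    return [z.conjugate() / len(xh) for z in y]

def fast_convolve(a, b):
    """a * b via the convolution theorem: pad to a power of two m >= 2n - 1,
    transform, multiply pointwise, transform back."""
    m = 1
    while m < len(a) + len(b) - 1:
        m *= 2
    ah = fft(list(a) + [0] * (m - len(a)))
    bh = fft(list(b) + [0] * (m - len(b)))
    c = ifft([ah[i] * bh[i] for i in range(m)])
    return [z.real for z in c[:len(a) + len(b) - 1]]
```

The returned values are floating-point approximations of the exact convolution; rounding them is safe only while the accumulated error stays below one half, which is the issue the rest of the paper addresses.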

### 2.4 The Schönhage-Strassen Algorithm

The Schönhage-Strassen algorithm multiplies two integers by convolving them and then performing carries. Let two base-$b$ integer representations be $x = (x_0, \ldots, x_{n-1})$ and $y = (y_0, \ldots, y_{m-1})$. We consider the digits as the coefficients of two polynomials. Then $x = f_x(b)$ and $y = f_y(b)$, and

$$xy = f_x(b) f_y(b) = f_{x*y}(b).$$

So, to compute $xy$, we can first compute $x * y$ in $O(n \log n)$ steps and then we can evaluate $f_{x*y}(b)$. The evaluation of $f_{x*y}(b)$ to yield an integer representation is simply the process of performing carries.
The Schönhage-Strassen Algorithm
1. Input: x ∈ {0, …, b−1}^n and y ∈ {0, …, b−1}^m
2. Output: z ∈ {0, …, b−1}^{n+m} # z = xy
3. Compute x ∗ y using the fast convolution algorithm.
4. Set c = 0. # c = carry
5. For i = 0 up to n+m−2 do {
6. Set z_i = ((x∗y)_i + c) mod b.
7. Set c = ⌊((x∗y)_i + c)/b⌋.
8. }.
9. Set z_{n+m−1} = c.
Clearly the Schönhage-Strassen algorithm performs the multiplication using $O(n \log n)$ complex arithmetic operations.

When finite-precision complex arithmetic is done, rounding errors are introduced. However, this can be countered: we know that $x * y$ must be a vector of integers. As long as the rounding errors introduced are sufficiently small, we can round to the nearest integers and obtain the correct result. Schönhage and Strassen [9] proved that $O(\log n)$-bit floating-point numbers give sufficient precision.
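Putting the pieces together, a naïve (floating-point, no correctness guarantee) version of the whole algorithm looks like this in Python. It rounds blindly to the nearest integer, which is precisely the step that the interval extension of subsection 2.5 makes rigorous; all names here are illustrative, not the authors' code.

```python
import cmath

def fft(x):
    """Radix-2 FFT as in subsection 2.2; len(x) a power of two."""
    n = len(x)
    if n == 1:
        return x[:]
    even, odd = fft(x[0::2]), fft(x[1::2])
    w = cmath.exp(2j * cmath.pi / n)
    return [even[i % (n // 2)] + w ** i * odd[i % (n // 2)] for i in range(n)]

def ifft(xh):
    y = fft([z.conjugate() for z in xh])
    return [z.conjugate() / len(xh) for z in y]

def ss_multiply(x, y, b=256):
    """Multiply two digit vectors (least-significant digit first):
    convolve via FFT, round blindly, then perform carries."""
    m = 1
    while m < len(x) + len(y) - 1:
        m *= 2
    xh = fft([complex(d) for d in x] + [0] * (m - len(x)))
    yh = fft([complex(d) for d in y] + [0] * (m - len(y)))
    conv = ifft([xh[i] * yh[i] for i in range(m)])
    z, c = [], 0
    for i in range(len(x) + len(y) - 1):
        s = int(round(conv[i].real)) + c  # rounding must be correct!
        z.append(s % b)
        c = s // b
    while c:  # flush the remaining carry digits
        z.append(c % b)
        c //= b
    return z

def to_digits(v, b=256):
    d = []
    while v:
        d.append(v % b)
        v //= b
    return d or [0]

def to_int(d, b=256):
    return sum(di * b**i for i, di in enumerate(d))
```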

### 2.5 Interval Arithmetic

Our rigorous extension of the algorithm uses containment sets. By replacing all complex numbers with complex containment sets, we can modify the Schönhage-Strassen algorithm to find a containment set of $x * y$; if the containment set only contains one integer-valued vector, then we can be certain that this is the correct value. We have used rectangular containment sets of machine-representable floating-point intervals with directed rounding to guarantee the desired integer product. A brief overview of the needed interval analysis [6] is given next.

Let $\underline{a}, \overline{a}$ be real numbers with $\underline{a} \le \overline{a}$. Let $[\underline{a}, \overline{a}] = \{x \in \mathbb{R} : \underline{a} \le x \le \overline{a}\}$ be a closed and bounded real interval and let the set of all such intervals be $\mathbb{IR}$. Note that $\mathbb{R} \subset \mathbb{IR}$ since we allow thin or punctual intervals with $\underline{a} = \overline{a}$. If $\star$ is one of the arithmetic operators $+$, $-$, $\cdot$, $/$, we define arithmetic over operands in $\mathbb{IR}$ by $[\underline{a}, \overline{a}] \star [\underline{b}, \overline{b}] := \{a \star b : a \in [\underline{a}, \overline{a}],\ b \in [\underline{b}, \overline{b}]\}$, with the exception that $[\underline{a}, \overline{a}] / [\underline{b}, \overline{b}]$ is undefined if $0 \in [\underline{b}, \overline{b}]$. Due to continuity and monotonicity of the operations and compactness of the operands, arithmetic over $\mathbb{IR}$ is given by real arithmetic operations with the bounds:

$$\begin{aligned}
[\underline{a}, \overline{a}] + [\underline{b}, \overline{b}] &= [\underline{a} + \underline{b},\ \overline{a} + \overline{b}] \\
[\underline{a}, \overline{a}] - [\underline{b}, \overline{b}] &= [\underline{a} - \overline{b},\ \overline{a} - \underline{b}] \\
[\underline{a}, \overline{a}] \cdot [\underline{b}, \overline{b}] &= [\min\{\underline{a}\underline{b}, \underline{a}\overline{b}, \overline{a}\underline{b}, \overline{a}\overline{b}\},\ \max\{\underline{a}\underline{b}, \underline{a}\overline{b}, \overline{a}\underline{b}, \overline{a}\overline{b}\}] \\
[\underline{a}, \overline{a}] / [\underline{b}, \overline{b}] &= [\underline{a}, \overline{a}] \cdot [1/\overline{b},\ 1/\underline{b}], \quad \text{if } 0 \notin [\underline{b}, \overline{b}].
\end{aligned}$$
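The four interval operations above can be prototyped in Python. Note that this sketch uses ordinary floating-point arithmetic on the bounds; a rigorous implementation such as C-XSC additionally rounds the lower bound down and the upper bound up (directed rounding).

```python
class Interval:
    """Closed real interval [lo, hi]; punctual if lo == hi.
    No directed rounding: an illustrative sketch, not a rigorous type."""
    def __init__(self, lo, hi=None):
        self.lo = lo
        self.hi = lo if hi is None else hi
    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o):
        return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        p = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval(min(p), max(p))
    def __truediv__(self, o):
        if o.lo <= 0 <= o.hi:
            raise ZeroDivisionError("0 lies in the divisor interval")
        return self * Interval(1 / o.hi, 1 / o.lo)
    def __contains__(self, x):
        return self.lo <= x <= self.hi
```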

In addition to the above elementary operations over elements in $\mathbb{IR}$, our algorithm requires us to contain the range of the square root function over elements in $\mathbb{IR}$. Once again, due to the monotonicity of the square root function over the non-negative reals, it suffices to work with the real image of the bounds: $\sqrt{[\underline{a}, \overline{a}]} = [\sqrt{\underline{a}}, \sqrt{\overline{a}}]$ for $0 \le \underline{a}$. To complete the requirements for our rigorous extension of the Schönhage-Strassen algorithm we need to extend addition, multiplication and division by a non-zero integer to elements in

$$\mathbb{IC} := \left\{ [\underline{z}, \overline{z}] := [\underline{z_1}, \overline{z_1}] + i\,[\underline{z_2}, \overline{z_2}] : [\underline{z_1}, \overline{z_1}], [\underline{z_2}, \overline{z_2}] \in \mathbb{IR} \right\}.$$

Interval arithmetic over $\mathbb{IR}$ naturally extends to $\mathbb{IC}$, the set of rectangular complex intervals. Addition and subtraction over $\mathbb{IC}$, given by

$$[\underline{z}, \overline{z}] \pm [\underline{w}, \overline{w}] = ([\underline{z_1}, \overline{z_1}] \pm [\underline{w_1}, \overline{w_1}]) + i\,([\underline{z_2}, \overline{z_2}] \pm [\underline{w_2}, \overline{w_2}]),$$

are sharp, but multiplication and division are not, due to rectangular wrapping effects. Complex interval multiplication, and division of a complex interval by a non-negative integer, can be contained with real interval multiplications given by

$$[\underline{z}, \overline{z}] \cdot [\underline{w}, \overline{w}] = ([\underline{z_1}, \overline{z_1}] \cdot [\underline{w_1}, \overline{w_1}] - [\underline{z_2}, \overline{z_2}] \cdot [\underline{w_2}, \overline{w_2}]) + i\,([\underline{z_1}, \overline{z_1}] \cdot [\underline{w_2}, \overline{w_2}] + [\underline{z_2}, \overline{z_2}] \cdot [\underline{w_1}, \overline{w_1}]).$$
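A rectangular complex interval is then just a pair of real intervals, and the containment formula above becomes (again a plain-float sketch without directed rounding; the real-interval class is repeated so the block is self-contained):

```python
class Interval:
    """Closed real interval [lo, hi] (no directed rounding in this sketch)."""
    def __init__(self, lo, hi=None):
        self.lo = lo
        self.hi = lo if hi is None else hi
    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o):
        return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        p = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval(min(p), max(p))

class CInterval:
    """Rectangular complex interval: re + i*im with real intervals re, im."""
    def __init__(self, re, im):
        self.re, self.im = re, im
    def __add__(self, o):
        return CInterval(self.re + o.re, self.im + o.im)
    def __sub__(self, o):
        return CInterval(self.re - o.re, self.im - o.im)
    def __mul__(self, o):
        # four real interval multiplications, as in the displayed formula
        return CInterval(self.re * o.re - self.im * o.im,
                         self.re * o.im + self.im * o.re)
```

For the punctual interval $1 + i$, squaring gives the punctual rectangle $[0,0] + i[2,2]$, i.e. exactly $2i$; for thick intervals the rectangle generally overestimates the true product set, which is the wrapping effect mentioned above.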

See [4] for details about how C-XSC manipulates rectangular containment sets over $\mathbb{IR}$ and $\mathbb{IC}$.

## 3 Results

We have implemented the Schönhage-Strassen algorithm, our containment-set version with rectangular complex intervals and long multiplication in C++ using the C-XSC library [4]. Our implementation is available at http://www.math.canterbury.ac.nz/~r.sainudiin/codes/capa/multiply/ (please also download the C-XSC library from http://www.math.uni-wuppertal.de/~xsc/). Results show that, using base 256, our version of the algorithm is usually able to guarantee correct answers for up to 75,000-digit numbers.

The following graph compares the speed of long multiplication (labelled ‘Long multiplication’), the conventional Schönhage-Strassen algorithm with different underlying data types (the implementation using the C-XSC complex data type is the line labelled ‘complex naïve SS’ and the one using our own implementation of complex numbers based on the C++ double data type is labelled ‘double naïve SS’) and our containment-set version (‘cinterval extended SS’) on uniformly-random $n$-digit base-256 inputs. All tests were performed on a 2.2 GHz 64-bit AMD Athlon 3500+ Processor running Ubuntu 9.04 using C-XSC version 2.2.4 and gcc version 4.3.1. Times were recorded using the C++ clock() function — that is to say, CPU time was recorded. Note that only the ‘Long multiplication’ and ‘cinterval extended SS’ implementations are guaranteed to produce correct results. The ‘double naïve SS’ and ‘complex naïve SS’ implementations may have produced erroneous results, as the implementations do not necessarily provide sufficient precision; these are still shown for comparison. Note also that by ‘naïve’ we mean that these implementations use fixed-precision floating-point arithmetic, whereas the ‘real’ Schönhage-Strassen algorithm uses variable-precision, which is much slower.

The above graph shows that the Schönhage-Strassen algorithm is much more efficient than long multiplication for large inputs. However, our modified version of the algorithm is slower than the naïve Schönhage-Strassen algorithm. We believe that C-XSC is not well-optimised; for example, its punctual complex data type (used in the ‘complex naïve SS’ implementation) is much slower than our double-based complex data type (used in the ‘double naïve SS’ implementation), even though ostensibly they are the same thing. We see that the C-XSC cinterval type (used in the ‘cinterval extended SS’ implementation) is about three times slower than the complex type. This leaves the possibility that a more optimised implementation of containment sets would be able to compete with commercial algorithms. Investigations such as [3] have shown that the naïve Schönhage-Strassen algorithm is able to compete with commercial implementations.

Note that the “steps” seen in the graph can be explained by the fact that the algorithm will always round the size up to the nearest power of two. Thus there are steps at the powers of two. The most important feature of our results is the range of input sizes for which our algorithm successfully determines the answer. Using only standard double-precision IEEE floating-point numbers, we are able to use the algorithm to multiply 75,000-digit, or 600,000-bit, integers; this range is more than sufficient for most applications, and at this point the second Schönhage-Strassen algorithm will become competitive.

## 4 Conclusion

Our investigation has demonstrated that the Schönhage-Strassen algorithm with containment sets is a practical algorithm that could be used reliably for applications such as cryptography. Schönhage and Strassen never showed that their algorithm had any practical value. However, as our implementation was not optimised, this is more of a feasibility study than a finished product.

Note that the advantage of our algorithm over the original Schönhage-Strassen algorithm is that we make use of hardware-based floating-point arithmetic, whereas the original is designed to make use of much slower software-based arithmetic. Both the original algorithm and our adaptation always produce correct results. However, we use a different approach to guaranteeing correctness. The naïve algorithms we mention are not guaranteed to be correct because they are modifications of the Schönhage-Strassen algorithm that take no measures to ensure correctness; they simply use fixed-precision floating-point arithmetic and hope for the best. These are only useful for speed comparisons.

It remains to optimise our implementation of the algorithm to compete with commercial libraries. This is dependent on a faster implementation of interval arithmetic. It may also be interesting to use circular containment sets rather than rectangular containment sets. The advantage of circular containment sets is that they easily deal with complex rotations — that is, multiplying by $\omega^i$; this is in fact the only type of complex multiplication (other than division by an integer) that our algorithm performs.

## References

• [2] Martin Fürer (2007): Faster Integer Multiplication. In: 39th ACM STOC, San Diego, California, USA, pp. 57–66.
• [3] Pierrick Gaudry, Alexander Kruppa & Paul Zimmermann (2007): A GMP-based implementation of Schönhage-Strassen's large integer multiplication algorithm. In: ISSAC ’07: Proceedings of the 2007 International Symposium on Symbolic and Algebraic Computation, ACM, New York, NY, USA, pp. 167–174.
• [4] W. Hofschuster & W. Krämer (2004): C-XSC 2.0: A C++ library for extended scientific computing. In: R. Alt, A. Frommer, R. B. Kearfott & W. Luther, editors: Numerical Software with Result Verification, Lecture Notes in Computer Science 2991, Springer-Verlag, pp. 15–35.
• [5] Donald E. Knuth (1998): The Art of Computer Programming, Volume 2: Seminumerical Algorithms. Addison-Wesley, 3rd edition.
• [6] Ramon E. Moore, R. Baker Kearfott & Michael J. Cloud (2009): Introduction to Interval Analysis. Society for Industrial and Applied Mathematics, Philadelphia, PA, USA.
• [7] Colin Percival (2003): Rapid multiplication modulo the sum and difference of highly composite numbers. Math. Comput. 72(241), pp. 387–395.
• [8] R. L. Rivest, A. Shamir & L. Adleman (1978): A Method for Obtaining Digital Signatures and Public-Key Cryptosystems. Communications of the ACM 21(2), pp. 120–126.
• [9] A. Schönhage & V. Strassen (1971): Schnelle Multiplikation großer Zahlen (Fast Multiplication of Large Numbers). Computing: Archiv für elektronisches Rechnen (Archives for electronic computing) 7, pp. 281–292. (German).