An Efficient Numerical Algorithm for the Optimal Transport Problem with Applications to Image Processing
Abstract
We present a numerical method to solve the optimal transport problem with a quadratic cost when the source and target measures are periodic probability densities. The method is based on a numerical resolution of the corresponding Monge–Ampère equation. We extend the damped Newton algorithm of Loeper and Rapetti LR () to the more general case of a non-uniform density, which is relevant to the optimal transport problem, and we show that our algorithm converges for sufficiently large damping coefficients. The main idea consists of designing an iterative scheme in which the fully nonlinear equation is approximated by a non-constant coefficient linear elliptic PDE that we solve numerically. We introduce several improvements and some new techniques for the numerical resolution of the corresponding linear system. Namely, we use a Fast Fourier Transform (FFT) method by Strain St (), which increases the efficiency of our algorithm over the standard finite difference method. Moreover, we use a fourth-order finite difference scheme to approximate the partial derivatives involved in the nonlinear terms of the Newton algorithm, which are evaluated once at each iteration; this leads to a significant improvement of the accuracy of the method without sacrificing its efficiency. Finally, we present numerical experiments which demonstrate the robustness and efficiency of our method on several examples from image processing, including an application to multiple sclerosis disease detection.
Department of Mathematics and Statistics, University of Victoria, PO BOX. 3060 STN, Victoria, B.C., V8W 3R4, Canada.
Keywords: Monge–Ampère equation, optimal transport, numerical solution, Newton’s method, nonlinear PDE, image processing.
2010 MSC:
49M15, 35J96, 65N06, 68U10.
1 Introduction
The optimal transport problem, also known as the Monge–Kantorovich problem, originated from a famous engineering problem by Monge M (), for which Kantorovich produced a more tractable relaxed formulation Kan (). This problem deals with the optimal way of allocating resources from one site to another while keeping the cost of transportation minimal. Formally, if $\mu$ is a probability measure modelling the distribution of material in the source domain $X$, and $\nu$ is another probability measure modelling the structure of the target domain $Y$, the Monge–Kantorovich problem consists of finding the optimal transport plan $T$ in
(1) $\displaystyle \inf \Big\{ \int_X c(x, T(x))\, d\mu(x) \;:\; T_{\#}\mu = \nu \Big\},$
where $c(x,y)$ denotes the cost of transporting a unit mass of material from a position $x \in X$ to a location $y \in Y$, and $T_{\#}\mu = \nu$ means that $\nu(B) = \mu(T^{-1}(B))$ for all Borel sets $B \subset Y$, that is, the quantity of material supplied in a region $B$ of $Y$ coincides with the total amount of material transported from the region $T^{-1}(B)$ of $X$ via the transport plan $T$. When the cost function is quadratic, i.e. $c(x,y) = |x-y|^2/2$, the corresponding optimal transport problem is known as the $L^2$ optimal transport problem. This particular case has attracted many researchers in the past few decades, and a lot of interesting theoretical results have been obtained, along with several applications in science and engineering, such as meteorology, fluid dynamics and mathematical economics. We refer to the recent monographs of Villani V (); Vi () for an account of these developments. One of the most important results concerns the form of the solution to the $L^2$ optimal transport problem. Indeed, if both the source and target measures are absolutely continuous with respect to the Lebesgue measure, with $d\mu = f(x)\,dx$ and $d\nu = g(y)\,dy$, Brenier B () showed that the $L^2$ optimal transport problem has a unique invertible solution $T$ ($\mu$-a.e.) that is characterized as the gradient of a convex function, $T = \nabla\Phi$. Moreover, if $f$ and $g$ are sufficiently regular (in a sense to be specified later), it is proved that $\Phi$ is of class $C^2$ and satisfies the Monge–Ampère equation
(2) $\det\big(D^2 \Phi(x)\big) = \dfrac{f(x)}{g(\nabla \Phi(x))}$
(see Delanoë De (), Caffarelli Ca1 (); Ca2 (), Urbas Ur ()). Therefore, for smooth source and target probability densities $f$ and $g$, a convex solution $\Phi$ to the Monge–Ampère equation (2) provides the optimal solution $T = \nabla\Phi$ to the $L^2$ optimal transport problem.
In this paper, we are interested in the numerical resolution of the $L^2$ optimal transport problem. Concerning this issue, only a few numerical methods are available in the literature, e.g. BB (); CW (); HRT (). Even if some of these methods are efficient, they all have issues that call for improvement, most of the time regarding their convergence (which is not always guaranteed). Therefore, the numerical results they produce are sometimes unsatisfactory. Although this list is not exhaustive, a more elaborate discussion of the advantages and disadvantages of each of these methods is given in the concluding remarks in Section 6. Our goal in this paper is to present an efficient numerical method which is stable and guaranteed to converge (before discretization). To that end, contrary to the previously existing methods, we propose to solve numerically the Monge–Ampère equation (2) for a convex solution $\Phi$, thus producing the solution $\nabla\Phi$ to the optimal transport problem. The numerical resolution of the Monge–Ampère equation is still a challenge, though some progress has been made recently, e.g. LR (); O (). In LR (), Loeper and Rapetti considered the Monge–Ampère equation (2) in the particular case where the target density is uniform, $g \equiv 1$, and used a damped Newton algorithm to solve the equation. They also provided a proof of convergence of their method (before discretization) under appropriate regularity assumptions on the initial density. However, their assumption on the final density ($g \equiv 1$) is too restrictive and therefore strongly limits the potential applications of their result from an optimal transport point of view. Here, we extend their method to the general case where the final density is arbitrary (but sufficiently regular), which is the general context of the optimal transport problem, and we make their algorithm more efficient through several numerical improvements. This is a novelty of our work compared to LR ().
Specifically, we approximate the fully nonlinear PDE (2) by a sequence of linear elliptic PDEs via a damped Newton algorithm. Then we extend the convergence result of LR () to the more general context where the final density is arbitrary (under suitable regularity assumptions). Several new techniques are introduced here owing to the difficulties arising from the final density, which is no longer uniform in our work. Moreover, we present several numerical improvements to the method introduced in LR (). More precisely, we solve the linear PDEs approximating (2) using two different discretization methods, namely, a standard second-order finite difference implementation, as used in LR (), and a fast Fourier transform (FFT) implementation (see Strain St ()). The FFT algorithm provides what appears to be a globally stable method. In addition to this FFT speed-up compared to LR (), we also use, for both implementations, fourth-order centered differences to approximate the first and second order derivatives involved in the nonlinear right-hand side of the Newton algorithm. To prove the theoretical convergence of the solutions of the linear elliptic PDEs to the actual convex solution of the Monge–Ampère equation (2), we exploit interior a priori estimates on classical solutions of the Monge–Ampère equation. As far as we know, no global estimates that we could use are available for this equation. We thus restrict ourselves to a periodic setting to take advantage of the local estimates. Even with this restriction, our numerical method still gives very good results in practice when applied to a wide range of examples, periodic or non-periodic (see Section 5).
The paper is organized as follows. In Section 2 we present the problem together with some background results that will be used later. In Section 3, we introduce the damped Newton algorithm for the Monge–Ampère equation in the general case of the optimal transport problem and discuss its convergence. In Section 4, we propose two different ways of discretizing the algorithm, and then test these implementations on three examples in Section 5. One of these examples is taken from medical imaging, namely, the detection of multiple sclerosis (MS) in a brain magnetic resonance imaging (MRI) scan. Finally, we conclude with some remarks in Section 6.
2 Problem setting
In what follows, $d \geq 1$ is an integer, and we denote by $e_i$, $i = 1, \dots, d$, the canonical unit vectors of $\mathbb{R}^d$. A function $h : \mathbb{R}^d \to \mathbb{R}$ is said to be 1-periodic if $h(x + e_i) = h(x)$ for all $x \in \mathbb{R}^d$ and $i = 1, \dots, d$. Note that for such a function, its values on the subset $[0,1]^d$ of $\mathbb{R}^d$ are sufficient to determine its values on the whole space $\mathbb{R}^d$. Based on this remark, we will identify in the sequel 1-periodic functions on $\mathbb{R}^d$ with their restrictions to $[0,1]^d$. Now, let $\mu$ and $\nu$ be two probability measures absolutely continuous with respect to the Lebesgue measure on $\mathbb{R}^d$, and assume that their respective densities $f$ and $g$ are 1-periodic. Then the $L^2$ optimal transport problem with these densities reads as
(3) $\displaystyle \inf \Big\{ \int_{[0,1]^d} \tfrac{1}{2}\,|x - T(x)|^2\, f(x)\, dx \;:\; T_{\#}\mu = \nu \Big\}.$
Moreover, the unique solution $T = \nabla\Phi$ to this problem (where $\Phi$ is convex) satisfies the Monge–Ampère equation (2) on $[0,1]^d$. The regularity and boundary conditions corresponding to this Monge–Ampère equation are given by the following theorem due to Cordero–Erausquin CE ().
Theorem 2.1.
Assume that $\mu$ and $\nu$ are defined as above, and let $m_f, m_g$ and $M_f, M_g$ denote the infima and suprema of $f$ and $g$ respectively. Then there exists a convex function $\Phi$ which pushes $\mu$ forward to $\nu$ (i.e. $\nabla\Phi_{\#}\mu = \nu$) such that $\nabla\Phi$ is additive in the sense that $\nabla\Phi(x + p) = \nabla\Phi(x) + p$ for almost every $x \in \mathbb{R}^d$ and for all $p \in \mathbb{Z}^d$. Moreover, $\nabla\Phi$ is unique and invertible ($\mu$-a.e.), and its inverse $\nabla\Phi^{*}$ satisfies $(\nabla\Phi^{*})_{\#}\nu = \mu$. In addition, if $f$ and $g$ are of class $C^{\alpha}$ with $\alpha \in (0,1)$ and if $f, g > 0$, then $\Phi \in C^{2,\beta}$ for some $\beta \in (0, \alpha)$ and it is a convex solution of the Monge–Ampère equation (2).
Note that since $\nabla\Phi$ is additive, it can be written as the identity plus the gradient of a 1-periodic function. Thus, we assume $\Phi(x) = |x|^2/2 + u(x)$ with $u(x + e_i) = u(x)$ for all $x \in \mathbb{R}^d$ and $i = 1, \dots, d$, i.e. $u$ is 1-periodic. So by using this change of function, $\Phi = |\cdot|^2/2 + u$, in the Monge–Ampère equation (2), we see that the corresponding equation in $u$ satisfies a periodic boundary condition on $[0,1]^d$. This justifies why we introduce this change of function in Section 3 to rewrite Eq. (2). In fact, the periodic boundary conditions will allow us to use interior a priori estimates for classical solutions of the Monge–Ampère equation on the whole domain in order to prove the convergence of our algorithm (see Section 3.2). We also infer from this theorem that $\Phi$ is the unique (up to a constant) convex solution of the Monge–Ampère equation (2) on $[0,1]^d$. Finally, classical bootstrapping arguments from the theory of elliptic regularity can be used to prove that if $f$ and $g$ are $C^{\infty}$, then $\Phi \in C^{\infty}$.
3 The Damped Newton algorithm
3.1 Derivation of the algorithm
Loeper and Rapetti presented in LR () a numerical method based on Newton’s algorithm to solve the equation
$\det\big(I + D^2 u(x)\big) = f(x)$
in a periodic setting. This equation can be associated with the $L^2$ optimal transport problem in the case where the target measure has a uniform density, i.e. $g \equiv 1$. Here, we propose to extend this algorithm and the underlying analysis to the general case of an arbitrary smooth 1-periodic density $g$. Motivated by the remark made in Section 2, we follow LR () and introduce the change of function $\Phi(x) = |x|^2/2 + u(x)$ to rewrite the Monge–Ampère equation (2) in the equivalent form
(4) $f(x) = g\big(x + \nabla u(x)\big)\, \det\big(I + D^2 u(x)\big).$
Therefore, we will solve (4) for a 1-periodic solution $u$ such that $\Phi(x) = |x|^2/2 + u(x)$ is convex on $\mathbb{R}^d$. Since we want to develop an algorithm based on Newton’s method, we first linearize (4). Indeed, using the formula for the derivative of the determinant LR (); PP (), we have
$\det\big(I + D^2(u + \theta)\big) = \det\big(I + D^2 u\big) + \operatorname{tr}\!\big(\operatorname{Adj}(I + D^2 u)\, D^2 \theta\big) + o(\theta),$
where $\operatorname{Adj}(M)$ denotes the adjugate matrix of $M$. Also, from the usual Taylor expansion, we have
$g\big(x + \nabla(u + \theta)\big) = g(x + \nabla u) + \nabla g(x + \nabla u) \cdot \nabla \theta + o(\theta).$
Multiplying the latter two expressions, we obtain that the derivative in direction $\theta$ of the right-hand side of equation (4), denoted $\mathcal{D}_u \theta$, is given by
(5) $\mathcal{D}_u \theta = g(x + \nabla u)\, \operatorname{tr}\!\big(\operatorname{Adj}(I + D^2 u)\, D^2 \theta\big) + \det\big(I + D^2 u\big)\, \nabla g(x + \nabla u) \cdot \nabla \theta.$
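As a quick sanity check on this directional derivative, one can compare it with a finite-difference quotient on a one-dimensional analogue of the Monge–Ampère operator, $u \mapsto g(x + u_x)(1 + u_{xx})$. This is only an illustrative sketch: the density $g$, the iterate $u$ and the direction $\theta$ below are arbitrary choices, not data from the paper.

```python
import numpy as np

N = 256
x = np.arange(N) / N
k = 2 * np.pi * np.fft.fftfreq(N, d=1.0 / N)

def dx(v):
    """Spectral first derivative on the 1-periodic grid."""
    return np.real(np.fft.ifft(1j * k * np.fft.fft(v)))

g  = lambda y: 1.0 + 0.3 * np.sin(2 * np.pi * y)    # hypothetical 1-periodic target density
dg = lambda y: 0.6 * np.pi * np.cos(2 * np.pi * y)  # its derivative

u     = 0.05 * np.sin(2 * np.pi * x)                # current iterate (small, so 1 + u_xx > 0)
theta = 0.03 * np.cos(2 * np.pi * x)                # perturbation direction

# 1D analogue of the operator g(x + grad u) * det(I + D^2 u)
M = lambda v: g(x + dx(v)) * (1.0 + dx(dx(v)))

# finite-difference quotient vs. the linearization in direction theta
eps = 1e-6
numeric  = (M(u + eps * theta) - M(u)) / eps
analytic = g(x + dx(u)) * dx(dx(theta)) + (1.0 + dx(dx(u))) * dg(x + dx(u)) * dx(theta)
```

The two arrays agree up to roughly the size of `eps`, the expected truncation error of the one-sided quotient.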
With this linearization at hand, we can now present the damped Newton algorithm that we will use to solve equation (4).
Damped Newton algorithm
(6) $\begin{cases} u^0 \ \text{given (constant)}, \\[2pt] \mathcal{D}_{u^n} \theta^n = f - g(x + \nabla u^n)\, \det\big(I + D^2 u^n\big), \\[2pt] u^{n+1} = u^n + \dfrac{1}{\tau}\, \theta^n. \end{cases}$
The factor $1/\tau$ ($\tau \geq 1$) in the algorithm is used as a stepsize parameter to help prevent the method from diverging by taking a step that goes “too far”. As we will see below, the value of $\tau$ is crucial for the proof of convergence of the algorithm. Indeed, we will show that if we start with a constant initial guess for the Newton method, then there is a $\tau$ large enough such that the method converges (provided some extra conditions on the densities are satisfied). Furthermore, by modifying some results presented in GT (), it is possible to prove that a second-order linear strictly elliptic PDE with periodic boundary conditions has a unique solution up to a constant if its zeroth-order coefficient vanishes. The linearized Monge–Ampère equation at step $n$ falls into this category through the setting of the algorithm. To fix the value of that constant, we select the solution satisfying $\int_{[0,1]^d} \theta^n\, dx = 0$. This is guaranteed by choosing a $\theta^n$ which satisfies this condition for every $n$.
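The damping mechanism of the algorithm, namely a full Newton correction scaled by $1/\tau$, can be isolated in a generic finite-dimensional sketch. The function `F` below is a toy stand-in for a discretized nonlinear operator, not the Monge–Ampère residual itself.

```python
import numpy as np

def damped_newton(F, J, u0, tau=2.0, tol=1e-12, max_iter=200):
    """Solve F(u) = 0 with the damped update u <- u + theta / tau,
    where theta is the full Newton correction solving J(u) theta = -F(u)."""
    u = np.asarray(u0, dtype=float).copy()
    for _ in range(max_iter):
        residual = F(u)
        if np.linalg.norm(residual) < tol:
            break
        theta = np.linalg.solve(J(u), -residual)  # linearized solve
        u += theta / tau                          # damped step
    return u

# toy problem: componentwise square roots, F(u) = u^2 - a
a = np.array([2.0, 9.0])
F = lambda u: u**2 - a
J = lambda u: np.diag(2.0 * u)
root = damped_newton(F, J, np.array([1.0, 1.0]))
```

With $\tau > 1$, local convergence degrades from quadratic to geometric with ratio about $1 - 1/\tau$, which is the trade-off the damping makes in exchange for robustness.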
3.2 Proof of Convergence
To prove the theoretical convergence of our algorithm (6), we follow the arguments in LR (), but we introduce several new key steps to deal with the non-uniform final density $g$. In particular, we rely on three a priori estimates for the solution of the Monge–Ampère equation. The first one is derived by Liu, Trudinger and Wang LTW () and goes as follows: if $f$ and $g$ are bounded between positive constants, and if they belong to $C^{\alpha}$ for some $\alpha \in (0,1)$, then a convex solution $\Phi$ of (2) satisfies
(7) 
where the exponent and the constant depend only on the dimension, on the bounds on the densities and on their Hölder norms. The second one, discovered by Caffarelli V () and expressed by Forzani and Maldonado FM (), presents a bound on the Hölder norm of $\nabla\Phi$. It states that if $f$ and $g$ are as in the previous estimate, there exists some constant such that
(8) 
for , where ,
and denote respectively the maximum and minimum of taken over the points such that . Since this estimate does not hold for all , one might wonder for which values it is actually valid. In FM (), it is shown that it holds for
Here $\omega_d$ is the volume of the $d$-dimensional unit ball. To give the reader an idea of these values, a few of them are presented in Table 1 below.
d  Rounded  

1  2  
2  
3  
4 
Finally, the third estimate controls the growth of the second derivatives of $\Phi$ with respect to boundary values, provided the densities are regular enough GT (); TW ():
(9) 
where the constant depends only on the dimension and on the data of the problem. We can now state and prove the theorem on the convergence of Algorithm (6). Note that the arguments of the proof are similar to LR (), but the fact that the target density is non-uniform here introduces some new difficulties that are worth exposing. In addition, the proof provides important information that can be used to gain some intuition about the performance of the algorithm in practice.
Theorem 3.1.
Let $f$ and $g$ be two positive 1-periodic probability densities bounded away from 0, and assume that the initial guess for the Newton algorithm (6) is constant. Then, under suitable Hölder regularity assumptions on $f$ and $g$, there exists $\tau$ large enough such that the Newton iterates converge, in Hölder norm, to the unique (up to a constant) solution of the Monge–Ampère equation (4). Moreover, $\tau$ depends only on $f$, $g$ and the constants defined as in Theorem 2.1.
Proof.
First we note that due to the additivity of the transport map, by applying the change of variable , we can prove that for all ,
i.e. at every step, we are solving an optimal transport problem between the current densities. Moreover, unless otherwise stated, we only need the regularity assumptions of the theorem. The main steps of the proof consist in showing by induction that the following claims hold for all $n$:
1) $I + D^2 u^n$ and $(I + D^2 u^n)^{-1}$ are smooth, uniformly positive definite (u.p.d.) matrices, where $I$ denotes the identity matrix.

2) the Hölder norms of the iterates $u^n$ are bounded by a constant independent of $n$.

3) the residual of equation (4) at $u^n$ is bounded by a constant independent of $n$.
We say that a matrix is smooth if all of its coefficients are smooth functions; it is u.p.d. (uniformly positive definite) if there exists a constant $c > 0$ such that $\xi^{\top} M(x)\, \xi \geq c\, |\xi|^2$ for all $x$ and all $\xi \in \mathbb{R}^d$. It is also worth mentioning that the statement in 1) actually implies that $\Phi^n(x) = |x|^2/2 + u^n(x)$ is uniformly convex and that the linearized operator $\mathcal{D}_{u^n}$ is strictly elliptic.
Note that for $u^0$ constant, we have $I + D^2 u^0 = I$. Next, let
Then, it is easy to see that all the claims 1), 2) and 3) hold for $n = 0$. Let us assume they hold for a certain $n$ and prove them for $n + 1$. For now, we suppose that the stepsize parameter could vary with $n$. We shall prove later that we can actually take it to be constant without affecting any result. Let $\theta^n$ be the unique solution of the linearized equation such that $\int_{[0,1]^d} \theta^n\, dx = 0$. According to the results of GT () (modified for the periodic case), there exists a constant $C$ such that
(10) 
Because , we deduce that and then Adj are smooth. Now, since is u.p.d., by assumption we get:
for large enough, where is a positive constant. Hence is a u.p.d. matrix. Next, inspired by the Taylor expansions previously shown, we write in terms of as follows:
(11)  
Now we bound the residual. It is easy to see that an explicit formula for it can be obtained from the second-order terms of the Taylor expansion of the Monge–Ampère operator, and that it consists of a sum of products of at least two first- or second-order derivatives of $\theta^n$ with $g$ and its derivatives evaluated at intermediate points. By (10), we know that we can bound the norm of the second derivatives of $\theta^n$ by a constant times the norm of the right-hand side. In addition, the Hölder norms of $g$ and of its first and second derivatives are all uniformly bounded. We then deduce that
(12) 
where could potentially depend on the Hölder norms of the first and second derivatives of . Next, by selecting , (11), (12) and 3) imply
This shows that bound 3) is preserved for $\tau$ large enough. In addition, it shows that we can take the stepsize such that the sequence of bounds created recursively will converge to a strictly positive constant. Let us now verify bound 2). If we take $\tau$ sufficiently large, from all the previous results and hypotheses we get
from which we deduce that . Following a similar approach with a stepsize , we obtain the other part of 2). Then, we go back and finish the proof of the first statement. Knowing that is u.p.d., we see that and therefore is invertible. We can prove that its inverse is also a u.p.d. matrix. Indeed, if , we have
Using the inequality with and , we obtain . Next, motivated by the equivalence of norms, we use the bounds we derived previously to get
where is a positive constant. This yields the claim:
We now use these statements to show that is a strictly elliptic operator,
Note that by removing from the previous inequalities, we get that is a u.p.d. matrix, which completes the proof of 1). Now, we show that the stepsize can be taken constant, as claimed before. Indeed, 1) gives by construction while 2) yields and
Therefore, all the conditions to the estimate (7) are satisfied at every step. Using inequalities on Hölder norms, we find
At this point, the only remaining challenge is to bound . It can be achieved through the second estimate (8). Since is the transport map moving to , we can refer to Theorem 2.1 to deduce that is invertible and thus , which in turn yields when . Therefore, we see that the maximum terms are going to be uniformly bounded and that the only problem could come from the minimum terms , or . Using ideas from convex analysis (presented for example in HL ()), we can show that since is uniformly convex for every , we have where the minimum is taken on the sphere , (with the periodicity we can increase the size of to include it inside and still have a uniform bound on and ). Furthermore, if and only if , is strictly monotone increasing because is and . We see that the only possible breakdown happens when converges to a function which is zero up to . This means as and , for any . Observe now that if we increase the regularity of the densities, we get at every step. This tells us that (see GT ()) and thus . Therefore, we can apply estimate (9) and rule out this potential breakdown case. We obtain that the norm of is uniformly bounded and thus, by the additivity of that function in a periodic setting, the same conclusion holds for its norm. Hence, we deduce that it is also the case for and then . From this, we get that we can select a constant $\tau$ such that the three statements hold for all $n$ by induction. Moreover, the sequence is uniformly bounded in Hölder norm, thus equicontinuous. By the Ascoli–Arzelà theorem, it converges uniformly to the solution of (4), which is unique since we impose the zero-mean normalization. Finally, due to the fact that the initial and final densities are actually smooth, we know that this solution will be smooth as well. ∎
3.3 Remarks on the Proof
This proof by induction provides a great deal of valuable information concerning the properties of the iterates created by our method. First, since $I + D^2 u^n$ is u.p.d. at every step, we see that the sequence of functions $\Phi^n(x) = |x|^2/2 + u^n(x)$ is actually a sequence of uniformly convex functions. Recall that the Monge–Ampère equation (2) is elliptic only when we restrict it to the space of convex functions. Therefore, the algorithm is extra careful, approximating the convex solution of the Monge–Ampère equation by a sequence of uniformly convex functions. In addition, this guarantees that the linearized equation is strictly elliptic and thus has a unique solution (once we fix the constant). Furthermore, just like in LR (), we can obtain estimates on the speed of convergence of the method: assuming the stepsize parameter constant, we obtained an estimate which tells us that the iterates converge to the solution geometrically, with an explicit rate depending on $\tau$. When it comes to the stepsize parameter $\tau$, it would be very useful to know a priori which value to select in order to make the algorithm converge. Such an estimate is unfortunately hard to acquire, since some of the constants appearing in the interior bounds are obtained via rather indirect arguments. However, we observe from the lower bounds on $\tau$ used in the proof that the minimum value of the stepsize parameter required to achieve convergence could potentially be large when the target density gets close to 0 or when the densities or their derivatives are large. Through the numerous numerical experiments we conducted, we observed that $\tau$ does seem to behave according to both conditions. Therefore, knowing a priori that the density could get close to 0, we can react accordingly by either increasing the value of the stepsize parameter or by modifying the representation of the densities (which is possible in some applications). Finally, even if our proof only guarantees convergence when the update is the solution of (5), in practice we can get good results by replacing it with the solution of a simplified linear equation, or sometimes an even simpler one.
4 Numerical Discretization
We present here a two-dimensional implementation of the Newton algorithm (6). We consider a uniform grid of $N \times N$ points with a space-step $h = 1/N$, where we identify $0$ with $1$ in each coordinate by periodicity. It is easy to see that the most important step for the efficiency of the method is the resolution of the linearized Monge–Ampère equation. Indeed, if we take $P$ to be the number of points on the grid ($P = N^2$ in 2D), as every other step can be done in $O(P)$ operations, the computational complexity of the whole method is dictated by the resolution of this linear PDE. Therefore, we will introduce below two methods for solving this equation. For the other steps, we employ fourth-order accurate centered finite differences for the discretization of the first and second derivatives of $u^n$. We thus improve considerably the accuracy of the results compared to LR (), where second-order differences are used to approximate these terms, but at the same time we do not decrease the efficiency of the whole algorithm, whose complexity is dominated by the resolution of the linear PDE. If we know $g$ explicitly, then we can compute the compositions with $g$ and its derivatives directly. However, this is not always the case, especially when we deal with discrete data as in the examples of image processing in Section 5. In such circumstances, we have to approximate them. To do so, a popular choice would be to use a linear interpolation, but in practice we find that a nearest-neighbour interpolation gives good results in most scenarios. Another salient point is that even though in theory the computed density has a total mass of 1 at every step, this is not necessarily the case in the numerical experiments, due to discretization errors. However, we need the right-hand side of the linearized Monge–Ampère equation to integrate to 0 on the whole domain. To deal with this, we introduce a normalization step right after computing the right-hand side in the implemented algorithm, taking
(13) $\tilde r^{\,n} = r^n - \displaystyle\int_{[0,1]^2} r^n(x)\, dx$
instead of the computed right-hand side $r^n$, thus translating it at every step.
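The fourth-order centered stencils mentioned above are standard; on a periodic grid they can be implemented compactly with `np.roll`. This is a sketch of the idea, not the paper’s code, and the test function is an arbitrary choice.

```python
import numpy as np

def d1_4th(v, h):
    """Fourth-order centered first derivative on a periodic grid:
    (-v[i+2] + 8 v[i+1] - 8 v[i-1] + v[i-2]) / (12 h)."""
    return (-np.roll(v, -2) + 8 * np.roll(v, -1)
            - 8 * np.roll(v, 1) + np.roll(v, 2)) / (12 * h)

def d2_4th(v, h):
    """Fourth-order centered second derivative on a periodic grid:
    (-v[i+2] + 16 v[i+1] - 30 v[i] + 16 v[i-1] - v[i-2]) / (12 h^2)."""
    return (-np.roll(v, -2) + 16 * np.roll(v, -1) - 30 * v
            + 16 * np.roll(v, 1) - np.roll(v, 2)) / (12 * h**2)

# demonstration on a smooth 1-periodic function
N = 64
h = 1.0 / N
x = np.arange(N) * h
v = np.sin(2 * np.pi * x)
dv = d1_4th(v, h)   # approximates 2*pi*cos(2*pi*x) with O(h^4) error
```

In 2D the same stencils are applied direction by direction (using the `axis` argument of `np.roll`).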
4.1 A Finite Differences Implementation
We begin by presenting an implementation of the resolution of the linearized Monge–Ampère equation through finite differences. This choice is motivated by the fact that it is the method chosen by Loeper and Rapetti in LR () for their corresponding algorithm. In this case, to reduce the complexity of the code, we only use centered finite differences of second order for the derivatives of the unknown. Since the linear PDE has a solution unique only up to a constant, the linear system corresponding to its discretization has one free parameter that we need to fix to turn the matrix into an invertible one. A possible strategy to achieve this is to create a new system by adding the extra equation $\sum_i \theta_i = 0$, which corresponds to selecting the solution with zero mean. Note that this new matrix has full rank, but it is not square. Then, we take that extra line, add it to all the other lines and then delete it to get a square system. The next lemma shows conditions under which the resolution of this square system will produce a valid answer to the original one. For the sake of notation, consider the new equation to be stored in the first line.
Lemma 4.1.
Let and as defined above, i.e., is a matrix with rank , and there exist real numbers not all zero such that where is the line of . If , then has rank .
The proof is a straightforward use of matrix algebra and therefore not reported here for brevity. This lemma does not hold if condition is not satisfied. Take for example a matrix such that its second line is equal to the negative of its first line and all its other lines are linearly independent. Then has rank . Nonetheless, due to the structure of our problem, this is not going to happen. Unfortunately, this strategy has the downside of somewhat destroying the sparsity of the matrix. One way to avoid this would be to equivalently fix only the value of at a given point and then use the same strategy. This would preserve most of the sparsity of the matrix.
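The row-addition strategy of Lemma 4.1 can be illustrated on a small periodic model problem: the 1D periodic Laplacian is singular, with the constants in its kernel, and adding the constraint row $\sum_i \theta_i = 0$ to every equation makes the system uniquely solvable. The matrix below is only a minimal stand-in for the discretized linearized operator.

```python
import numpy as np

N = 8
# 1D periodic Laplacian: rank N-1, nullspace spanned by the constant vector
A = (-2.0 * np.eye(N)
     + np.roll(np.eye(N), 1, axis=1)
     + np.roll(np.eye(N), -1, axis=1))

b = np.sin(2 * np.pi * np.arange(N) / N)
b -= b.mean()                    # compatibility: the rhs must have zero mean

# add the extra equation sum(theta) = 0 (with rhs 0) to every line of the system
A_mod = A + np.outer(np.ones(N), np.ones(N))
theta = np.linalg.solve(A_mod, b)  # the constraint rhs is zero, so b is unchanged
```

Multiplying $A_{\mathrm{mod}}\theta = b$ on the left by the constant vector shows that $\sum_i \theta_i = 0$, so the original equations $A\theta = b$ are recovered together with the normalization.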
Next, to actually solve the resulting system, we employ the Biconjugate Gradient (BICG) iterative method. This choice is justified by the fact that we are dealing with a (potentially sparse) matrix which is neither symmetric nor positive definite, and the BICG procedure is specifically designed to deal with these conditions S (). One feature of this method is that, provided it does not break down before, the sequence of approximate solutions it produces is guaranteed to converge to the solution of the linear system in a maximum of P steps. However, as we shall see later, in practice the number of iterations can be much smaller than that. In addition, even if the BICG algorithm is commonly employed with a preconditioner, we did not find the need to do so while conducting our numerical experiments.
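In practice the solve can be delegated to an off-the-shelf BICG routine such as SciPy’s, which accepts nonsymmetric sparse matrices. The tridiagonal matrix below is only a placeholder for the discretized linearized operator, not the paper’s actual system.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import bicg

# a nonsymmetric, diagonally dominant sparse system (advection-diffusion flavour)
N = 100
A = diags([-1.0, 2.5, -1.2], offsets=[-1, 0, 1], shape=(N, N), format="csr")
b = np.ones(N)

x, info = bicg(A, b)   # info == 0 signals convergence to the default tolerance
```

Because BICG only needs matrix-vector products, the sparsity of the discretized operator is exploited automatically by the CSR format.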
4.2 A Fourier Transform Implementation
One should realize that the first implementation we employed to solve the linearized Monge–Ampère equation might not be the best method. Indeed, there exist much cheaper ways to solve a linear second-order strictly elliptic equation with such boundary conditions. The one we explore here is due to Strain St () and requires only $O(P \log P)$ operations through the use of the FFT algorithm. It consists in rewriting the problem as the system
(14) 
where is the averaged in the sense that its coefficients are the integral over of the coefficients of . We then expand in Fourier series by taking
where representing and being the usual Fourier coefficients. Using this expansion in the first part of (14) yields the formula
where
(15) 
For the discretized problem, knowing the values of $\theta$ on the grid, we can compute its Fourier coefficients with one application of the FFT algorithm, and then recover the required derivatives with a few applications of the inverse FFT algorithm, so as to evaluate the operator applied to $\theta$ in $O(P \log P)$ operations. Therefore, we can use an iterative method to solve the first equation of system (14) at a cost of $O(P \log P)$ operations per iteration. As in Strain St (), we use the Generalized Minimal Residual method, or GMRES. Just like BICG, it is an efficient way of solving a linear system of equations where the matrix is neither symmetric nor positive definite S (). Moreover, GMRES does not use projections on the Krylov subspace generated by the transposed matrix. This makes it easier to code in the particular setting we are dealing with, since we do not form the matrix directly; we reference it instead through the result of its product with a given vector. Strain observed that the number of GMRES iterations required did not vary with the grid size, which yields a global complexity of $O(P \log P)$. Note that for better performance, we actually employ, like the author, the restarted GMRES(m) method. After computing the solution of the first equation, we need to solve the second one. This can be easily achieved since we already know the value of its right-hand side. More specifically, we have
i.e. it requires only one more application of the (inverse) FFT algorithm to obtain $\theta$. On top of the efficiency of this method, observe that it has other advantages. It is spectrally accurate, i.e. the error decreases faster than any power of the grid size as the space-step goes to 0. One can also prove that the convergence rate of the GMRES algorithm is independent of the grid size. For more details, one should consult the original paper St (). In the actual discretization of this method, we truncate the Fourier sums in the usual way, over the standard symmetric range of frequencies. We compute the averages of the operator’s coefficients with Simpson’s numerical integration formula. Finally, the discrete linear system still has a solution unique only up to a constant, and we can use the same strategy as in the previous case to fix it.
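The core of the FFT implementation, inverting a constant-coefficient elliptic operator on the torus by dividing by its Fourier symbol, can be sketched as follows for the simplest case where the averaged operator is a Laplacian; the full method wraps an operator of the form (14) around such a solve inside GMRES. The manufactured solution below is an arbitrary test choice.

```python
import numpy as np

def solve_const_coeff(r, h):
    """Invert a constant-coefficient operator (here the Laplacian) on the
    periodic unit square by dividing by its Fourier symbol.  The k = 0 mode
    is set to zero, which selects the mean-zero solution and fixes the
    additive constant."""
    N = r.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(N, d=h)      # angular wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    symbol = -(kx**2 + ky**2)                   # Fourier symbol of the Laplacian
    r_hat = np.fft.fft2(r)
    theta_hat = np.zeros_like(r_hat)
    nz = symbol != 0
    theta_hat[nz] = r_hat[nz] / symbol[nz]
    return np.real(np.fft.ifft2(theta_hat))

# manufactured mean-zero solution: Laplacian(theta) = -8 pi^2 theta for this mode
N = 32
h = 1.0 / N
grid = np.arange(N) * h
X, Y = np.meshgrid(grid, grid, indexing="ij")
exact = np.sin(2 * np.pi * X) * np.cos(2 * np.pi * Y)
theta = solve_const_coeff(-8 * np.pi**2 * exact, h)
```

Because the manufactured solution is a single Fourier mode, the recovery is exact up to roundoff, illustrating the spectral accuracy mentioned above.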
5 Numerical Tests
5.1 A theoretical example
Our goal here is to observe and compare the behaviour of the two implementations. Starting with a known $u$ and a known $g$, we compute the corresponding right-hand side $f$ with (4), and then we run the algorithm to recover $u$. We consider functions of the form
For the first implementation, we select for the BICG algorithm a tight tolerance and a maximum number of 1000 iterations per Newton step. For the FFT implementation, we take the same tolerance, with a restarting threshold of a few inner iterations for the GMRES algorithm. In both cases, a moderate value of $\tau$ was enough to achieve convergence. The errors are plotted in Figure 1 as functions of the Newton iterations, for both the FFT and finite difference implementations and for various grid sizes. We see that in both cases the error gets smaller as we increase the grid size. In particular, for this value of $\tau$, after the first few iterations, where the error settles down very quickly, the convergence follows a linear slope with a convergence rate slightly faster than one half; the estimated ratio is slightly smaller in the finite difference case, so the convergence is faster in that case for this final stage. Computing the observed order of accuracy from the errors between successive grid sizes, we obtain values close to four. This confirms fourth-order accuracy, consistent with the order of the finite difference scheme used to compute the right-hand side.
In order to investigate whether we can decrease the computing time without losing too much precision, we run the experiment again with a looser tolerance (see Figure 2). Due to the looser tolerance employed, the results are a bit erratic for the finite difference implementation, but overall still very good. Figure 2(c) shows the 3D plot of the error in the Fourier transform case, to give an idea of the distribution of the errors. As we can see, they seem evenly distributed over the whole domain. Figure 2(d) depicts what happens when we vary the value of the stepsize parameter $\tau$. The results behave according to our expectations, with a slower convergence for a bigger $\tau$. Note that for this new tolerance, the computational cost of one iteration is much lower, and the global computing time decreases considerably in both cases. We can quantify this by looking at Table 2. Observe that the BICG algorithm required fewer iterations than the worst-case scenario. This being said, we still see at first glance that the FFT method is much faster than the finite difference method. The number of GMRES iterations per Newton iteration stayed nearly constant as we increased the grid size, which confirms the expected computational complexity.
N     Average number of GMRES iterations     Total computing time for Fourier transforms     Average number of BICG iterations     Total computing time for finite differences
16    5.32      1.07      14.21     2.21
32    6.37      1.94      17.79     8.70
64    7.32      8.06      31.11     79.17
128   7.95      34.38     63.32     1221.10
256   8.05      145.07    134.63    34639.82