An Efficient Numerical Algorithm for the $L^2$ Optimal Transport Problem with Applications to Image Processing


Abstract

We present a numerical method to solve the optimal transport problem with a quadratic cost when the source and target measures are periodic probability densities. The method is based on a numerical resolution of the corresponding Monge-Ampère equation. We extend the damped Newton algorithm of Loeper and Rapetti [16] to the more general case of a non-uniform target density, which is the relevant setting for the optimal transport problem, and we show that our algorithm converges for sufficiently large damping coefficients. The main idea consists of designing an iterative scheme where the fully nonlinear equation is approximated by a non-constant coefficient linear elliptic PDE that we solve numerically. We introduce several improvements and some new techniques for the numerical resolution of the corresponding linear system. Namely, we use the Fast Fourier Transform (FFT) method of Strain [24], which increases the efficiency of our algorithm compared to the standard finite difference method. Moreover, we use a fourth order finite difference scheme to approximate the partial derivatives involved in the nonlinear terms of the Newton algorithm, which are evaluated once at each iteration; this leads to a significant improvement of the accuracy of the method without sacrificing its efficiency. Finally, we present numerical experiments which demonstrate the robustness and efficiency of our method on several examples of image processing, including an application to multiple sclerosis detection.

1 Introduction

The optimal transport problem, also known as the Monge-Kantorovich problem, originated from a famous engineering problem by Monge [18] for which Kantorovich produced a more tractable relaxed formulation [14]. This problem deals with the optimal way of allocating resources from one site to another while keeping the cost of transportation minimal. Formally, if $\mu$ is a probability measure modelling the distribution of material in the source domain $X$, and $\nu$ is another probability measure modelling the structure of the target domain $Y$, the Monge-Kantorovich problem consists of finding the optimal transport plan $T$ in

$$\min_{T_{\#}\mu = \nu} \int_X c\big(x, T(x)\big)\, d\mu(x), \qquad (1)$$

where $c(x, y)$ denotes the cost of transporting a unit mass of material from a position $x \in X$ to a location $y \in Y$, and $T_{\#}\mu = \nu$ means that $\nu(B) = \mu\big(T^{-1}(B)\big)$ for all Borel sets $B \subset Y$, that is, the quantity of material supplied in a region $B$ of $Y$ coincides with the total amount of material transported from the region $T^{-1}(B)$ of $X$ via the transport plan $T$. When the cost function is quadratic, i.e. $c(x, y) = |x - y|^2$, the corresponding optimal transport problem is known as the $L^2$ optimal transport problem. This particular case has attracted many researchers in the past few decades, and a lot of interesting theoretical results have been obtained along with several applications in science and engineering, such as meteorology, fluid dynamics and mathematical economics. We refer to the recent monographs of Villani [27] for an account of these developments. One of the most important results concerns the form of the solution to the $L^2$ optimal transport problem. Indeed, if both the source and target measures are absolutely continuous with respect to the Lebesgue measure on $\mathbb{R}^d$, with respective densities $f$ and $g$, Brenier [1] showed that the $L^2$ optimal transport problem has a unique invertible solution $T$ ($T^{-1} \circ T = \mathrm{id}$ a.e.) that is characterized as the gradient of a convex function $\Psi$, i.e. $T = \nabla\Psi$. Moreover, if $f$ and $g$ are sufficiently regular (in a sense to be specified later), it is proved that $\Psi$ is of class $C^2$ and satisfies the Monge-Ampère equation

$$\det D^2\Psi(x) = \frac{f(x)}{g\big(\nabla\Psi(x)\big)} \qquad (2)$$

(see Delanoë [8], Caffarelli [4], Urbas [26]). Therefore, for smooth source and target probability densities $f$ and $g$, a convex solution $\Psi$ to the Monge-Ampère equation (Equation 2) provides the optimal solution $T = \nabla\Psi$ to the $L^2$ optimal transport problem.

In this paper, we are interested in the numerical resolution of the $L^2$ optimal transport problem. Concerning this issue, only a few numerical methods are available in the literature, e.g. [2]. Even if some of these methods are efficient, they all have issues that call for improvement, most of the time regarding their convergence (which is not always guaranteed). Therefore, the numerical results they produce are sometimes not satisfactory. Although this list is not exhaustive, a more elaborate discussion of the advantages and disadvantages of each of these methods is given in the concluding remarks in Section 6. Our goal in this paper is to present an efficient numerical method which is stable and guaranteed to converge (before discretization). For that, contrary to the previously existing methods, we propose to solve numerically the Monge-Ampère equation (Equation 2) for a convex solution $\Psi$, thus producing the solution $T = \nabla\Psi$ to the optimal transport problem. The numerical resolution of the Monge-Ampère equation is still a challenge, though some progress has been made recently, e.g. [16]. In [16], Loeper and Rapetti considered the Monge-Ampère equation (Equation 2) in the particular case where the target density is uniform, i.e. $g \equiv 1$, and used a damped Newton algorithm to solve the equation. They also provided a proof of convergence of their method (before discretization) under some appropriate regularity assumptions on the initial density. However, their assumption of a uniform final density is too restrictive and therefore strongly limits the potential applications of their result from an optimal transport point of view. Here, we extend their method to the general case where the final density is arbitrary (but sufficiently regular), which is the general context of the optimal transport problem, and we make their algorithm more efficient through several numerical improvements; this is the novelty of our work compared to [16]. Specifically, we approximate the fully nonlinear PDE (Equation 2) by a sequence of linear elliptic PDEs via a damped Newton algorithm. Then we extend the convergence result of [16] to the more general context where the final density is arbitrary (under suitable regularity assumptions); several new techniques are introduced here to cope with the difficulties arising from the final density, which is no longer uniform. Moreover, we present several numerical improvements to the method introduced in [16]. More precisely, we solve the linear PDEs approximating (Equation 2) using two different discretization methods, namely, a standard second order finite difference implementation, as used in [16], and a fast Fourier transform (FFT) implementation (see Strain [24]). The FFT implementation provides what appears to be a globally stable method. In addition to this FFT speed-up compared to [16], we also use, for both implementations, fourth order centered differences to approximate the first and second order derivatives involved in the nonlinear right-hand side of the Newton algorithm. To prove the theoretical convergence of the solutions of the linear elliptic PDEs to the actual convex solution of the Monge-Ampère equation (Equation 2), we exploit interior a priori estimates on the classical solutions of the Monge-Ampère equation. As far as we know, no global estimates that we could use are available for this equation. We thus restrict ourselves to a periodic setting to take advantage of the local estimates. Even with this restriction, our numerical method still gives very good results in practice when applied to a wide range of examples, periodic or non-periodic (see Section 5).

The paper is organized as follows. In Section 2 we present the problem together with some background results that will be used later. In Section 3, we introduce the damped Newton algorithm for the Monge-Ampère equation in the general case of the $L^2$ optimal transport problem and discuss its convergence. In Section 4, we propose two different ways of discretizing the algorithm, and then test these implementations on three examples in Section 5. One of these examples is taken from medical imaging, namely, the detection of multiple sclerosis (MS) in a brain magnetic resonance imaging (MRI) scan. Finally, we conclude with some remarks in Section 6.

2 Problem setting

In what follows, $d \geq 1$ is an integer, and we denote by $e_i$ the $i$-th canonical unit vector of $\mathbb{R}^d$. A function $u$ defined on $\mathbb{R}^d$ is said to be 1-periodic if $u(x + e_i) = u(x)$ for all $x \in \mathbb{R}^d$ and $i = 1, \dots, d$. Note that for such a function, its values on the subset $\Omega = [0,1]^d$ of $\mathbb{R}^d$ are sufficient to define its values on the whole space $\mathbb{R}^d$. Based on this remark, we will identify in the sequel 1-periodic functions on $\mathbb{R}^d$ with their restrictions to $\Omega$. Now, let $\mu$ and $\nu$ be two probability measures absolutely continuous with respect to the Lebesgue measure on $\mathbb{R}^d$, and assume that their respective densities $f$ and $g$ are 1-periodic. Then the $L^2$ optimal transport problem with these densities reads as

$$\min_{T_{\#}\mu = \nu} \int_{\Omega} |x - T(x)|^2\, f(x)\, dx. \qquad (3)$$

Moreover, the unique solution $T = \nabla\Psi$ to this problem (where $\Psi$ is convex) satisfies the Monge-Ampère equation (Equation 2) on $\mathbb{R}^d$. The regularity and boundary conditions corresponding to this Monge-Ampère equation are given by a theorem of Cordero-Erausquin [6], which asserts in particular that there exists a convex potential $\Psi$, unique up to an additive constant, such that $T = \nabla\Psi$ pushes $\mu$ forward to $\nu$ and is additive, in the sense that $\nabla\Psi(x + e_i) = \nabla\Psi(x) + e_i$ for all $x$ and $i$.

Note that since $T = \nabla\Psi$ is additive, it can be written as the identity plus the gradient of a 1-periodic function. Thus, we write $\Psi(x) = \frac{|x|^2}{2} + u(x)$ with $u(x + e_i) = u(x)$ for all $x$ and $i$, i.e. $u$ is 1-periodic. So by using this change of function in the Monge-Ampère equation (Equation 2), we see that the corresponding equation in $u$ satisfies a periodic boundary condition on $\Omega$. This justifies why we introduce this change of function in Section 3 to rewrite Eq. (Equation 2). In fact, the periodic boundary conditions will allow us to use interior a priori estimates for classical solutions of the Monge-Ampère equation on the whole domain in order to prove the convergence of our algorithm (see Section 3.2). We also infer from this theorem that if the densities $f$ and $g$ are positive and sufficiently regular, then $\Psi$ is the unique (up to a constant) convex solution of the Monge-Ampère equation (Equation 2) on $\mathbb{R}^d$. Finally, classical bootstrapping arguments from the theory of elliptic regularity can be used to prove that if $f, g \in C^\infty(\mathbb{R}^d)$, then $\Psi \in C^\infty(\mathbb{R}^d)$.

3 The Damped Newton algorithm

3.1 Derivation of the algorithm

Loeper and Rapetti presented in [16] a numerical method based on Newton's algorithm to solve the equation

$$\det\big(I + D^2 u(x)\big) = f(x)$$

in a periodic setting. This equation can be associated with the $L^2$ optimal transport problem in the case where the target measure has a uniform density, i.e. $g \equiv 1$. Here, we propose to extend this algorithm and the underlying analysis to the general case of an arbitrary smooth 1-periodic density $g$. Motivated by the remark made in Section 2, we follow [16] and introduce the change of function $\Psi(x) = \frac{|x|^2}{2} + u(x)$ to rewrite the Monge-Ampère equation (Equation 2) in the equivalent form

$$f(x) = g\big(x + \nabla u(x)\big)\,\det\big(I + D^2 u(x)\big). \qquad (4)$$
Therefore, we will solve (Equation 4) for a 1-periodic solution $u$ such that $\Psi(x) = \frac{|x|^2}{2} + u(x)$ is convex on $\mathbb{R}^d$. Since we want to develop an algorithm based on Newton's method, we first linearize (Equation 4). Indeed, using the formula for the derivative of the determinant [16], we have

$$\det\big(I + D^2(u + \theta v)\big) = \det\big(I + D^2 u\big) + \theta\,\mathrm{tr}\big(\mathrm{Adj}(I + D^2 u)\, D^2 v\big) + o(\theta),$$

where $\mathrm{Adj}(M)$ denotes the adjugate of the matrix $M$, i.e. the transpose of its cofactor matrix. Also, from the usual Taylor expansion, we have

$$g\big(x + \nabla(u + \theta v)\big) = g(x + \nabla u) + \theta\,\nabla g(x + \nabla u)\cdot\nabla v + o(\theta).$$

Multiplying the latter two expressions, we obtain that the derivative in direction $v$ of the right hand side of equation (Equation 4), denoted $\mathcal{M}_u(v)$, is given by

$$\mathcal{M}_u(v) = g(x + \nabla u)\,\mathrm{tr}\big(\mathrm{Adj}(I + D^2 u)\, D^2 v\big) + \det\big(I + D^2 u\big)\,\nabla g(x + \nabla u)\cdot\nabla v.$$
With this linearization at hand, we can now present the damped Newton algorithm that we will use to solve equation (Equation 4).

Damped Newton algorithm

Given a constant initial guess $u_0$ and a damping parameter $\tau \geq 1$, iterate for $n \geq 0$: find the 1-periodic $v_n$ solving

$$\mathcal{M}_{u_n}(v_n) = \frac{1}{\tau}\Big(f - g(x + \nabla u_n)\,\det\big(I + D^2 u_n\big)\Big), \qquad \int_{\Omega} v_n\, dx = 0, \qquad (5)$$

and set

$$u_{n+1} = u_n + v_n. \qquad (6)$$
The factor $\frac{1}{\tau}$ ($\tau \geq 1$) in the algorithm is used as a step-size parameter to help prevent the method from diverging by taking a step that goes “too far”. As we will see below, the value of $\tau$ is crucial for the proof of convergence of the algorithm. Indeed, we will show that if we start with a constant initial guess for the Newton method, then there is a value of $\tau$ above which the method converges (provided some extra conditions on the densities are satisfied). Furthermore, by modifying some results presented in [11], it is possible to prove that a second order linear strictly elliptic PDE with periodic boundary conditions has a unique solution up to a constant if its zeroth-order coefficient vanishes. The linearized Monge-Ampère equation at step $n$, namely (Equation 5), falls into this category through the setting of the algorithm. To fix the value of that constant, we select the solution satisfying $\int_{\Omega} v_n\, dx = 0$; this is guaranteed by choosing a $v_n$ which satisfies this condition for every $n$.
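To make the structure of the iteration concrete, here is a minimal sketch in Python/NumPy (our experiments in Section 5 use a MATLAB implementation; the names `residual` and `solve_linearized` are placeholders for the discretizations of Section 4, not identifiers from our code):

```python
import numpy as np

def damped_newton(residual, solve_linearized, u0, tau=2.0, n_iter=20):
    """Sketch of the damped Newton iteration (Equation 6).

    residual(u)            -- f - g(x + grad u) det(I + D^2 u), cf. (Equation 4)
    solve_linearized(u, r) -- a 1-periodic v solving M_u(v) = r, cf. (Equation 5)
    """
    u = u0.copy()                  # constant initial guess, e.g. u0 = 0
    for _ in range(n_iter):
        r = residual(u) / tau      # damped right-hand side
        r -= r.mean()              # make it integrate to 0 (cf. the
                                   # normalization step of Section 4)
        v = solve_linearized(u, r)
        v -= v.mean()              # fix the free additive constant
        u = u + v
    return u
```

Either of the two solvers described in Section 4 can be plugged in as `solve_linearized`.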

3.2 Proof of Convergence

To prove the theoretical convergence of our algorithm (Equation 6), we follow the arguments in [16], but we introduce several new key steps to deal with the non-uniform final density $g$. In particular, we rely on three a priori estimates for the solution of the Monge-Ampère equation. The first one is derived by Liu, Trudinger and Wang [17] and goes as follows: if $\lambda \leq f, g \leq \Lambda$ for some positive constants $\lambda$ and $\Lambda$, and if $f, g \in C^{\alpha}$ for some $\alpha \in (0, 1)$, then a convex solution $\Psi$ of (Equation 2) satisfies an interior estimate of the form

$$\|\Psi\|_{C^{2,\alpha}(\Omega')} \leq C, \qquad (7)$$

where $\Omega'$ is a compact subset of the domain and $C$ is a constant which depends only on $d$, $\alpha$, $\lambda$, $\Lambda$ and the distance from $\Omega'$ to the boundary. The second one, discovered by Caffarelli [27] and expressed by Forzani and Maldonado [9], presents a bound on the Hölder seminorm of $\nabla\Psi$. It states that if $f$ and $g$ are as in the previous estimate, there exists some constant $K > 0$ such that

$$|\nabla\Psi(x) - \nabla\Psi(y)| \leq K\,|x - y|^{\alpha^*} \qquad (8)$$

for $x$ and $y$ in the domain, where the exponent $\alpha^*$ and the constant $K$ are explicit: they involve the bounds $\lambda$ and $\Lambda$ on the densities, as well as the maximum and minimum of $\Psi$ taken over suitable subsets of the domain. Since this estimate does not hold for every exponent, one might wonder for which values it is actually valid. In [9], an explicit admissible range for $\alpha^*$ is derived in terms of $\lambda$, $\Lambda$ and $\omega_d$, the volume of the $d$-dimensional unit ball. To give the reader an idea of these values, a few of them are presented in Table 1 below.

Table 1: Quantities involved in the bounds on the Hölder exponent $\alpha^*$ of (Equation 8), for dimensions $d = 1, 2, 3, 4$.

Finally, the third estimate controls the growth of the second derivatives of $\Psi$ in terms of boundary values, provided the solution and the data are regular enough [11]; it takes the form

$$\sup_{\Omega}\,|D^2 \Psi| \leq C, \qquad (9)$$

where $C$ depends only on the dimension and on appropriate norms of the data in $\Omega$. We can now state and prove the theorem on the convergence of Algorithm (Equation 6). Note that the arguments of the proof are similar to [16], but the fact that the target density is non-uniform here introduces some new difficulties that are worth exposing. In addition, the proof provides important information that can be used to gain some intuition about the performance of the algorithm in practice.

First we note that, due to the additivity of the transport map, by applying the change of variable $y = x + \nabla u_n(x)$ we can prove that, for all $n$, the map $x \mapsto x + \nabla u_n(x)$ pushes the density $f$ forward to an intermediate density $g_n$, i.e. at every step we are solving the optimal transport problem sending $f$ to $g_n$. Moreover, unless otherwise stated, we only need to assume that $f$ and $g$ are Hölder continuous and bounded away from $0$ and $\infty$. The main steps of the proof consist in showing by induction that the following claims hold for all $n$:

  1. $I + D^2 u_n$ and $\mathrm{Adj}(I + D^2 u_n)$ are smooth, uniformly positive definite (u.p.d.) matrices, where $I$ denotes the identity matrix.

  2. $c_1 \leq g(x + \nabla u_n)\,\det(I + D^2 u_n) \leq c_2$, where $c_1, c_2 > 0$ are independent of $n$.

  3. $\big\|f - g(x + \nabla u_n)\,\det(I + D^2 u_n)\big\|_{C^{\alpha}} \leq B$, where $B$ is independent of $n$.

We say that a matrix is smooth if all of its coefficients are in $C^{\alpha}$; it is u.p.d. (uniformly positive definite) if there exists a constant $c > 0$ such that $\xi^{T} A(x)\,\xi \geq c\,|\xi|^2$ for all $x$ and $\xi$. It is also worth mentioning that the statement in 1) actually implies that $\Psi_n(x) = \frac{|x|^2}{2} + u_n(x)$ is uniformly convex and that $\mathcal{M}_{u_n}$ is a strictly elliptic linear operator.

Note that for $u_0$ constant, we have $\det(I + D^2 u_0) = 1$ and $\mathrm{Adj}(I + D^2 u_0) = I$. Choosing the constants in 2) and 3) accordingly, it is then easy to see that all the claims 1), 2) and 3) hold for $n = 0$. Let's assume they hold for a certain $n$ and prove them for $n + 1$. For now, we suppose that the step-size parameter could vary with $n$. We shall prove later that we can actually take it to be constant without affecting any result. Let $v_n$ be the unique solution of (Equation 5) such that $\int_{\Omega} v_n\, dx = 0$. According to the results of [11] (modified for the periodic case) there exists a constant $C_E > 0$ such that

$$\|v_n\|_{C^{2,\alpha}} \leq \frac{C_E}{\tau}\,\big\|f - g(x + \nabla u_n)\,\det(I + D^2 u_n)\big\|_{C^{\alpha}}. \qquad (10)$$
Because $v_n \in C^{2,\alpha}$, we deduce that $I + D^2 u_{n+1}$ and then $\mathrm{Adj}(I + D^2 u_{n+1})$ are smooth. Now, since $I + D^2 u_n$ is u.p.d. with constant $c$, by the induction assumption and (Equation 10) we get:

$$\xi^{T}\big(I + D^2 u_{n+1}(x)\big)\,\xi = \xi^{T}\big(I + D^2 u_n(x)\big)\,\xi + \xi^{T} D^2 v_n(x)\,\xi \geq \Big(c - \frac{C}{\tau}\Big)|\xi|^2 \geq c'\,|\xi|^2$$

for $\tau$ large enough, where $c'$ is a positive constant. Hence $I + D^2 u_{n+1}$ is a u.p.d. matrix. Next, inspired by the Taylor expansions previously shown, we write the residual at step $n + 1$ in terms of the residual at step $n$ as follows:

$$f - g(x + \nabla u_{n+1})\,\det(I + D^2 u_{n+1}) = \Big(1 - \frac{1}{\tau}\Big)\Big(f - g(x + \nabla u_n)\,\det(I + D^2 u_n)\Big) + R_n.$$
Now we bound the remainder $R_n$. It is easy to see that an explicit formula for $R_n$ can be obtained from the second order terms of the Taylor expansion of the Monge-Ampère operator, and it consists of a sum of products of at least two first or second order derivatives of $v_n$, with $g$ and its derivatives evaluated at $x + \nabla u_n$ and with second derivatives of $u_n$. By (Equation 10), we know that we can bound the $C^{\alpha}$ norm of the second derivatives of $v_n$ by a constant times $\frac{1}{\tau}$ times the norm of the residual. In addition, since the densities are regular, the Hölder norms of $g$ and its first and second derivatives are all uniformly bounded. We then deduce that

$$\|R_n\|_{C^{\alpha}} \leq \frac{C_R}{\tau^2}\,\big\|f - g(x + \nabla u_n)\,\det(I + D^2 u_n)\big\|_{C^{\alpha}}^2,$$

where $C_R$ could potentially depend on the Hölder norms of the first and second derivatives of $u_n$. Next, by selecting $\tau$ large enough, the last two displayed estimates and 3) imply

$$\big\|f - g(x + \nabla u_{n+1})\,\det(I + D^2 u_{n+1})\big\|_{C^{\alpha}} \leq \Big(1 - \frac{1}{2\tau}\Big)\big\|f - g(x + \nabla u_n)\,\det(I + D^2 u_n)\big\|_{C^{\alpha}}.$$

This shows that bound 3) is preserved for $\tau$ large enough. In addition, it shows that we can take the step-size such that the sequence of bounds created recursively will converge to a limit strictly greater than zero. Let's now verify bound 2). If we take $\tau$ large enough, from all the previous results and hypotheses we get a uniform upper bound on $g(x + \nabla u_{n+1})\,\det(I + D^2 u_{n+1})$,
from which we deduce the upper bound in 2). Following a similar approach with a large enough step-size, we obtain the other part of 2). Then, we go back and finish the proof of the first statement. Knowing that $A_{n+1} = I + D^2 u_{n+1}$ is u.p.d., we see that $\det A_{n+1} > 0$ and therefore $A_{n+1}$ is invertible. We can prove that its inverse is also a u.p.d. matrix. Indeed, if $\xi \neq 0$, we have

$$\xi^{T} A_{n+1}^{-1}\,\xi = \big(A_{n+1}^{-1}\xi\big)^{T} A_{n+1}\,\big(A_{n+1}^{-1}\xi\big) \geq c'\,\big|A_{n+1}^{-1}\xi\big|^2.$$

Using the inequality $|\xi| \leq \|A_{n+1}\|\,\big|A_{n+1}^{-1}\xi\big|$, we obtain $\big|A_{n+1}^{-1}\xi\big|^2 \geq |\xi|^2 / \|A_{n+1}\|^2$. Next, motivated by the equivalence of norms, we use the bounds we derived previously to get

$$\xi^{T} A_{n+1}^{-1}\,\xi \geq \tilde{c}\,|\xi|^2,$$

where $\tilde{c}$ is a positive constant. This yields the claim. We now use these statements to show that $\mathcal{M}_{u_{n+1}}$ is a strictly elliptic operator: its second order coefficient matrix is

$$g(x + \nabla u_{n+1})\,\mathrm{Adj}(A_{n+1}) = g(x + \nabla u_{n+1})\,\det(A_{n+1})\,A_{n+1}^{-1},$$

which is u.p.d. thanks to 2) and the previous inequality. Note that by removing the factor $g(x + \nabla u_{n+1})\,\det(A_{n+1})$ from the previous inequalities, we get that $\mathrm{Adj}(A_{n+1})$ is a u.p.d. matrix, which completes the proof of 1). Now, we show that the step-size can be taken constant, as claimed before. Indeed, 1) gives by construction ellipticity constants which are uniform in $n$, while 2) yields uniform positive bounds on the intermediate densities.
Therefore, all the conditions of the estimate (Equation 7) are satisfied at every step. Using standard inequalities on Hölder norms, we find that the relevant constants are uniform in $n$, so that a single sufficiently large step-size parameter works at every step.
At this point, the only remaining challenge is to bound the $C^{2,\alpha}$ norm of $u_n$ uniformly in $n$. It can be achieved through the second estimate (Equation 8). Since $\nabla\Psi_n$ is the transport map moving $f$ to $g_n$, we can refer to the theorem of Cordero-Erausquin mentioned in Section 2 to deduce that it is invertible, which yields control of the corresponding maxima. Therefore, we see that the maximum terms appearing in (Equation 8) are going to be uniformly bounded and that the only problem could come from the minimum terms. Using ideas from convex analysis (presented for example in [12]), we can show that since $\Psi_n$ is uniformly convex for every $n$, these minima admit a lower bound where the minimum is taken on a sphere (with the periodicity, we can increase the size of the domain to include this sphere and still have a uniform bound). Furthermore, since $\Psi_n$ is uniformly convex, the relevant function is strictly monotone increasing. We see that the only possible breakdown happens when the minimum converges to zero as $n \to \infty$. Observe now that if we increase the regularity of the densities, we get additional smoothness of $u_n$ at every step. This tells us (see [11]) that the second derivatives of $u_n$ are controlled; therefore, we can apply estimate (Equation 9) and rule out this potential breakdown case. We obtain that the norm of $\nabla u_n$ is uniformly bounded and thus, by the additivity of the corresponding map in a periodic setting, the same conclusion holds for the norm of $u_n$ itself. Hence, we deduce that it is also the case for the second derivatives, and then for the full $C^{2,\alpha}$ norm. From this, we get that we can select a constant $\tau$ such that the three statements hold for all $n$ by induction. Moreover, the sequence $(u_n)$ is uniformly bounded in $C^{2,\alpha}$, thus equicontinuous. By the Arzelà-Ascoli theorem, it converges uniformly (in $C^2$, up to a subsequence) to the solution of (Equation 4), which is unique since we impose the zero-mean condition. Finally, due to the assumed regularity of the initial and final densities, we know that this solution enjoys the corresponding higher regularity.

3.3 Remarks on the Proof

This proof by induction provides a lot of precious information concerning the properties of the iterates created by our method. First, since $I + D^2 u_n$ is u.p.d. at every step, we realize that the sequence of functions $\Psi_n(x) = \frac{|x|^2}{2} + u_n(x)$ is actually one of uniformly convex functions. Recall that the Monge-Ampère equation (Equation 2) is elliptic only when we restrict it to the space of convex functions. Therefore, the algorithm is extra careful, approximating the convex solution of the Monge-Ampère equation by a sequence of uniformly convex functions. In addition, this guarantees that the linearized equation is strictly elliptic and thus has a unique solution (once we fix the constant). Furthermore, just like in [16], we can obtain estimates on the speed of convergence of the method. Indeed, with a constant step-size parameter $\tau$, we got

$$\big\|f - g(x + \nabla u_n)\,\det(I + D^2 u_n)\big\|_{C^{\alpha}} \leq \Big(1 - \frac{1}{2\tau}\Big)^{n}\,\big\|f - g(x + \nabla u_0)\,\det(I + D^2 u_0)\big\|_{C^{\alpha}},$$

which tells us that the residual converges to $0$ geometrically, with a rate of at least $1 - \frac{1}{2\tau}$. When it comes to the step-size parameter $\tau$, it would be very useful to know a priori which value to select in order to make the algorithm converge. Such an estimate is unfortunately hard to acquire since some of the constants used through interior bounds are obtained via rather indirect arguments. However, we observe from the lower bounds on $\tau$ used in the proof that the minimum value required on the step-size parameter to achieve convergence could potentially be large when $g$ is close to $0$ or when the derivatives of the densities are large. Through the numerous numerical experiments we conducted, we realized that the required $\tau$ seems to behave according to both conditions. Therefore, knowing a priori that $g$ could get close to $0$, we can react accordingly by either increasing the value of the step-size parameter or by modifying the representation of the densities (which is possible in some applications). Finally, even if our proof only guarantees convergence when the update is the solution of (Equation 5), in practice we can get good results by replacing it by the solution of a simplified linear equation (for instance with frozen coefficients), or sometimes by an even simpler equation.

4 Numerical Discretization

We present here a two-dimensional implementation of the Newton algorithm (Equation 6). We consider a uniform grid of $[0,1]^2$ with a space-step $h = 1/N$ in each direction, where we identify the boundary points $0$ and $1$ by periodicity. It is easy to see that the most important step for the efficiency of the method is the resolution of the linearized Monge-Ampère equation. Indeed, if we take $P$ to be the number of points on the grid ($P = N^2$ in 2D), as every other step can be done in $O(P)$ operations, the computational complexity of the whole method is dictated by the resolution of this linear PDE. Therefore, we will introduce below two methods for solving this equation. For the other steps, we employ fourth-order accurate centered finite differences for the discretization of the first and second derivatives of $u_n$; see the sketch after this paragraph. We thus improve considerably the accuracy of the results compared to [16], where second order differences are used to approximate these terms, but at the same time we do not decrease the efficiency of the whole algorithm, whose complexity is dominated by the resolution of the linear PDE. If we know $g$ explicitly, then we can compute the compositions $g(x + \nabla u_n)$ and $\nabla g(x + \nabla u_n)$ directly. However, this is not always the case, especially when we deal with discrete data as in the examples of image processing in Section 5. In such circumstances, we have to approximate them. To do so, a popular choice would be to use a linear interpolation, but in practice we find that using only a closest neighbour interpolation gives good results in most scenarios. Another salient point is that even though in theory the transported density has a total mass of 1 at every step, this is not necessarily the case in the numerical experiments, due to discretization errors. However, we need the right-hand side of the linearized Monge-Ampère equation to integrate to 0 on the whole domain. To deal with this, we introduce a normalization step right after computing the right-hand side $r_n$ of (Equation 5) in the implemented algorithm, taking

$$\tilde{r}_n = r_n - \int_{[0,1]^2} r_n(x)\, dx \qquad (13)$$

instead of $r_n$, thus translating it at every step.
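For illustration, the fourth-order centered stencils on a periodic grid take only a few lines; the following NumPy sketch (ours, not taken from our MATLAB code) implements them with circular shifts:

```python
import numpy as np

def d1(u, h, axis):
    """Fourth-order centered first derivative on a periodic grid."""
    return (-np.roll(u, -2, axis) + 8 * np.roll(u, -1, axis)
            - 8 * np.roll(u, 1, axis) + np.roll(u, 2, axis)) / (12 * h)

def d2(u, h, axis):
    """Fourth-order centered second derivative on a periodic grid."""
    return (-np.roll(u, -2, axis) + 16 * np.roll(u, -1, axis) - 30 * u
            + 16 * np.roll(u, 1, axis) - np.roll(u, 2, axis)) / (12 * h**2)

# The mixed derivative u_xy is obtained by composing the two directions:
# u_xy = d1(d1(u, h, axis=0), h, axis=1)
```

Periodicity makes the stencils valid at every grid point, so no special boundary treatment is needed.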

4.1 A Finite Differences Implementation

We begin by presenting an implementation of the resolution of the linearized Monge-Ampère equation through finite differences. This choice is motivated by the fact that it is the method chosen by Loeper and Rapetti in [16] for their corresponding algorithm. In this case, to reduce the complexity of the code, we only use centered finite differences of second-order for the derivatives of the unknown $v_n$. Since the linear PDE has a unique solution only up to a constant, the linear system $Au = b$ corresponding to its discretization has one free parameter that we need to fix to turn the matrix into an invertible one. A possible strategy to achieve this is to create a new system by adding the extra equation $\sum_i u_i = 0$, which corresponds to selecting the solution with zero mean. Note that this new matrix has full rank, but it is not square. Then, we take that extra line, add it to all the other lines of the system and then delete it to get a square system, $\tilde{A} u = b$. The next lemma shows conditions under which the resolution of $\tilde{A} u = b$ will produce a valid answer to the system $A u = b$. For the sake of notation, consider the new equation to be stored in the first line of the augmented system.

The proof is a straightforward use of matrix algebra and is therefore not reported here for brevity. This lemma does not hold if its rank condition is not satisfied. Take for example a matrix such that its second line is equal to the negative of its first line and all its other lines are linearly independent; then the modified matrix is rank deficient. Nonetheless, due to the structure of our problem, this is not going to happen. Unfortunately, this strategy has the downside of somewhat destroying the sparsity of the matrix. One way to avoid this would be to equivalently fix only the value of the solution at a given grid point and then use the same strategy. This would preserve most of the sparsity of the matrix.
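As a sketch of this strategy (dense linear algebra only, for illustration; the sparsity-preserving variant would instead pin the value at one grid point):

```python
import numpy as np

def fix_free_constant(A, b):
    """Fold the extra equation sum(u) = 0 into the rank-deficient
    system A u = b: adding the all-ones row to every equation yields
    the square system (A + ones) u = b discussed above."""
    P = A.shape[0]
    A_tilde = A + np.ones((P, P))   # row_i <- row_i + (1, ..., 1)
    return A_tilde, b               # rhs unchanged: the extra equation equals 0
```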

Next, to actually solve the system $\tilde{A} u = b$, we employ the Biconjugate Gradient (BICG) iterative method. This choice can be justified by the fact that we are dealing with a (potentially sparse) matrix which is neither symmetric nor positive definite, the BICG procedure being specifically designed to deal with these conditions [22]. One feature of this method is that, provided it does not break down before, the sequence of approximate solutions it produces is guaranteed to converge to the solution of the linear system in a maximum of $P$ steps, which yields a computational complexity of at worst $O(P^2)$ for a sparse matrix. However, as we shall see later, in practice the iteration count can be much smaller than that. In addition, even if the BICG algorithm is commonly employed with a preconditioner, we did not find the need to do so while conducting our numerical experiments.
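In a Python/SciPy setting, the corresponding solve is an illustrative one-liner, not our MATLAB code (note that the tolerance keyword is `tol` in older SciPy releases):

```python
from scipy.sparse.linalg import bicg

def solve_with_bicg(A_tilde, b, rtol=1e-8, maxiter=1000):
    """Solve the fixed system with the Biconjugate Gradient method."""
    u, info = bicg(A_tilde, b, rtol=rtol, maxiter=maxiter)
    if info != 0:
        raise RuntimeError(f"BICG did not converge (info = {info})")
    return u
```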

4.2 A Fourier Transform Implementation

One should realize that the first implementation we employed to solve the linearized Monge-Ampère equation might not be the best method. Indeed, there exist much cheaper ways to solve a linear second-order strictly elliptic equation with such boundary conditions. The one we are going to explore here is due to Strain [24] and requires only $O(P \log P)$ operations through the use of the FFT algorithm. It consists in rewriting the problem $L u = r$ as the system

$$L L_0^{-1} w = r, \qquad L_0 u = w, \qquad (14)$$

where $L_0$ is the averaged operator, in the sense that its (constant) coefficients are the integrals over the domain of the coefficients of $L$. We then expand $w$ in Fourier series by taking

$$w(x) = \sum_{k} \hat{w}(k)\, e^{2\pi i\, k\cdot x},$$

where $k$ represents the integer wave vectors and $\hat{w}(k)$ are the usual Fourier coefficients. Using this expansion in the first part of (Equation 14) yields the formula

$$L_0^{-1} w(x) = \sum_{k \neq 0} \frac{\hat{w}(k)}{\sigma(k)}\, e^{2\pi i\, k\cdot x},$$

where

$$\sigma(k) = -4\pi^2 \sum_{i,j} \bar{a}_{ij}\, k_i k_j + 2\pi i\, \bar{b}\cdot k$$

is the symbol of $L_0$, with $\bar{a}_{ij}$ and $\bar{b}_j$ denoting the averaged coefficients of the second and first order terms. For the discretized problem, knowing the value of $w$ on the grid, we can compute $\hat{w}$ with one application of the FFT algorithm and then compute $L_0^{-1} w$ and its derivatives with a few applications of the inverse FFT algorithm, to be able to get the value of $L L_0^{-1} w$ in $O(P \log P)$ operations. Therefore, we can use an iterative method to solve the first equation of system (Equation 14) at a cost of $O(P \log P)$ operations per iteration. As in Strain [24], we use the Generalized Minimal Residual method, or GMRES. Just like BICG, it is an efficient way of solving a linear system of equations where the matrix is non-symmetric and non-positive definite [22]. Moreover, GMRES does not use projections on the Krylov subspace generated by the transposed matrix. This makes it easier to code for the particular setting we are dealing with, since we do not form the matrix directly; we reference it instead through the result of its product with a given vector. Strain observed that the number of GMRES iterations required did not vary with the grid size, which yields a global complexity of $O(P \log P)$. Note that for better performance, we actually employ, like the author, the restarted GMRES(m) method. After computing $w$, we need to solve $L_0 u = w$. This can be easily achieved since we already know the Fourier coefficients of $w$. More specifically, we have

$$u(x) = \sum_{k \neq 0} \frac{\hat{w}(k)}{\sigma(k)}\, e^{2\pi i\, k\cdot x},$$

i.e. it requires only one other application of the (inverse) FFT algorithm to obtain $u$. On top of the efficiency of this method, observe that it has other advantages. It is spectrally accurate, i.e. the truncation error decreases faster than any power of the grid size as the space-step goes to $0$. We can also prove that the convergence rate of the GMRES algorithm is independent of the grid size. For more details, one should consult the original paper [24]. In the actual discretization of this method, we truncate the sums in the usual way by letting each component of $k$ vary from $-N/2$ to $N/2 - 1$. We compute the averages of the operator's coefficients with Simpson's numerical integration formula. Finally, the discrete linear system still has a solution unique only up to a constant, and we can use the same strategy as in the previous case to fix it.
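The core of the method, the inversion of the averaged operator by division in Fourier space, can be sketched as follows (a NumPy illustration under the notation above; `a11`, `a12`, `a22`, `b1`, `b2` stand for the averaged coefficients):

```python
import numpy as np

def solve_averaged(w, a11, a12, a22, b1=0.0, b2=0.0):
    """Apply L0^{-1} on the periodic unit square by dividing by the Fourier
    symbol of L0 v = a11 v_xx + 2 a12 v_xy + a22 v_yy + b1 v_x + b2 v_y.
    w is an N x N real array with zero mean; the k = 0 mode of the result
    is set to 0, which selects the mean-zero solution."""
    N = w.shape[0]
    k = np.fft.fftfreq(N, d=1.0 / N)          # integer wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    symbol = (-4.0 * np.pi**2 * (a11 * kx**2 + 2.0 * a12 * kx * ky + a22 * ky**2)
              + 2.0j * np.pi * (b1 * kx + b2 * ky))
    w_hat = np.fft.fft2(w)
    v_hat = np.zeros_like(w_hat)
    mask = symbol != 0
    v_hat[mask] = w_hat[mask] / symbol[mask]
    return np.real(np.fft.ifft2(v_hat))
```

The first equation of (Equation 14) can then be handed to a restarted GMRES routine through a linear operator whose action on $w$ is `solve_averaged` followed by the application of the variable-coefficient operator $L$.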

5 Numerical Tests

5.1 A theoretical example


Our goal here is to observe and compare the behaviour of the two implementations. Starting with a known $u$ and a known $g$, we compute the corresponding $f$ with (Equation 4), and then we run the algorithm to recover $u$. We consider smooth 1-periodic trigonometric functions for $u$ and $g$. For the first implementation, we select for the BICG algorithm a tight tolerance and a maximum number of 1000 iterations per Newton step. For the FFT implementation, we take the same tolerance with a fixed restarting threshold of inner iterations for the GMRES algorithm. In both cases, a moderate value of $\tau$ was enough to achieve convergence. The errors on $u$ and on the residual are plotted in Figure ? as functions of the Newton iterations, for both the FFT and finite differences implementations and for various grid sizes ranging from $N = 16$ to $N = 256$ points per dimension. We see that in both cases the error gets smaller as we increase the grid size. In particular, for this value of $\tau$, after the first few iterations, where the error settles down very quickly, the convergence of the residual follows a linear slope with a convergence rate slightly faster than a half. The estimated ratio is somewhat smaller in the finite differences case, so the convergence is faster in this latter case for this final stage. Computing the observed order of accuracy from the errors between the computed and exact solutions, we get, from smaller to bigger grid sizes, values close to four; see the sketch below. This confirms that the observed accuracy is consistent with the order of the finite difference scheme used to compute the right-hand side.
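The observed order is obtained in the usual way from errors on successively doubled grids; a small sketch of this computation:

```python
import numpy as np

def observed_order(errors):
    """Observed order of accuracy p = log2(e_h / e_{h/2}) from errors
    measured on grids with N, 2N, 4N, ... points per dimension."""
    e = np.asarray(errors, dtype=float)
    return np.log2(e[:-1] / e[1:])
```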

In order to investigate whether we can decrease the computing time without losing too much precision in the results, we run the experiment again with a looser tolerance (see Figure ?). Due to the looser tolerance employed, the results are a bit erratic for the finite differences implementation, but overall still very good. Figure ? shows the 3d plot of the error in the Fourier transform case, to give an idea of the distribution of the errors. As we can see, they seem evenly distributed over the whole domain. Figure ? depicts what happens when we vary the value of the step-size parameter $\tau$. The results behave according to our expectations, with a slower convergence for a bigger $\tau$. Note that for this new tolerance, the computational cost of one iteration is much lower and the global computing time decreases a lot in both cases. We can quantify this by looking at Table 2. Observe that the BICG algorithm required fewer iterations than the worst-case scenario of $P$ steps. This being said, we still realize at first glance that the FFT method is much faster than the finite difference method. The number of GMRES iterations per Newton iteration stayed nearly constant as we increased the grid size, which confirms the expected $O(P \log P)$ computational complexity.


Table 2: Average number of GMRES and BICG iterations per Newton iteration, and total computing time in seconds for the whole experiment (20 iterations) with the looser tolerance, for all the different grid sizes. We used a MATLAB implementation on an Intel Xeon running at 2.33 GHz.

N   | Average number of GMRES iterations | Total time (s), Fourier transforms | Average number of BICG iterations | Total time (s), finite differences
16  | 5.32  | 1.07   | 14.21  | 2.21
32  | 6.37  | 1.94   | 17.79  | 8.70
64  | 7.32  | 8.06   | 31.11  | 79.17
128 | 7.95  | 34.38  | 63.32  | 1221.10
256 | 8.05  | 145.07 | 134.63 | 34639.82

Finally, in order to get an idea of the stability properties of both methods, we can measure the norm of the inverse of the matrix corresponding to the discretization of the linearized Monge-Ampère operator (see for example [15] for more information on the stability concept for iterative methods). This can be achieved by computing the spectral radius of that inverse matrix. In the finite differences case, we could get this eigenvalue directly by first obtaining the inverse and then computing the eigenvalues of the resulting matrix. We observed that for the current experiment, the spectral radius starts at a moderate value for the smallest grid and then grows almost linearly as we increase the grid size. Hence, the finite difference method appears unstable. For the FFT case, since we do not possess an explicit representation of the matrix, we have to use an indirect method to compute the spectral radius. The one we select is the power method (or power iteration). For a matrix $A$, it starts with a vector $x_0$ and computes the iterates $x_{k+1} = A x_k / \|A x_k\|$. If $A$ has a dominant eigenvalue and if $x_0$ has a non-zero component in the direction of the eigenvector associated with this largest eigenvalue, then the sequence converges to the eigenvector associated with the spectral radius of $A$ (see [10]). For the current experiment, we apply this technique to the inverse matrix produced at every Newton step by the discretization of the linearized Monge-Ampère equation via the FFT implementation. More specifically, for a given grid size, we start with a randomly generated $x_0$. Then, using the method presented in Section 4.2, we compute the product $y_k = A^{-1} x_k$ (i.e. one linear solve) and then the iterate $x_{k+1} = y_k / \|y_k\|$. Repeating this procedure several times with different random vectors $x_0$, we observed that the power iteration always converges to essentially the same small number after only a few iterations, for every grid size and at every Newton step. Thus, we conclude for this specific example that the spectral radius of the inverse matrix generated at every step of the Newton method stays uniformly bounded, which of course suggests that the FFT implementation is stable.
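A minimal sketch of this procedure (the callable `apply_inverse` stands for one solve of the linearized system by the FFT implementation, as described above):

```python
import numpy as np

def estimate_spectral_radius(apply_inverse, x0, n_iter=30):
    """Power iteration on a matrix known only through its action."""
    x = x0 / np.linalg.norm(x0)
    lam = 0.0
    for _ in range(n_iter):
        y = apply_inverse(x)
        lam = float(np.linalg.norm(y))   # estimate of |lambda_max|
        x = y / lam
    return lam
```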

5.2 Application to medical imaging

Here, we test our algorithm on one of the various applications of optimal transport. In the area of image processing, one of the most common tasks performed by practitioners is to determine a geometric correspondence between images taken from the same scene, in order to compare them or to integrate the information they contain into more meaningful data. One could think of pictures acquired at different times, with different equipment or from different viewpoints. This process falls into the category of what is referred to as image registration. There are two main types of image registration methods: the rigid ones, which involve translations or rotations, and the nonrigid ones, where some stretching of the image is required to map it onto the other one. People working on optimal transport recently realized that this theory could provide a good nonrigid image registration technique. Indeed, consider for example two grayscale images. We could think of them as representing a mass distribution of the amount of light “piled up” at a given location. A bright pixel on such an image would then represent a region with more mass, whereas a darker pixel would correspond to a region with less mass. Computing the optimal map between the two images and analyzing the rate of change of that map could reveal the best way (in terms of minimizing the transportation distance) of moving the mass from the first density to the second, precisely showing what is changing in the images and how it is happening. In [21], Rehman et al. actually list several advantages of the optimal transport method for image registration. However, they also stress the fact that it is computationally expensive, and this is one reason why it is important to find efficient numerical methods to solve this problem.

Our first applied test is in the field of medical imaging. We consider the two brain MRI scans presented in Figure ?. These images were taken from the BrainWeb simulated brain database at McGill University [3] and represent a slice of a healthy brain and a slice of the same brain where multiple sclerosis is spreading. This nervous system disease damages the myelin sheaths around the axons of the brain and leaves scars (or sclerosis) visible on an MRI. We chose MS as a test case since its actual detection process relies on neuro-imaging, which tries to identify the scars, whose presence leaves traces similar to multiple tumours. Note that because the scans are dark, their representation in grayscale contains many values close to or equal to 0. For the obvious reason of accuracy, we normalize the densities and bound them away from 0 by applying the translation in (Equation 13) to both of them before initiating the algorithm. In addition, we rescale them so that they are exactly square. This is not required, but helps simplify the code.
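For illustration, the preprocessing can be sketched as follows (the shift `delta` is our own illustrative choice, not a value from our experiments):

```python
import numpy as np

def image_to_density(img, delta=0.01):
    """Turn a grayscale image into a strictly positive probability
    density on [0,1]^2: translate it away from zero, then rescale so
    that it integrates to 1 on the unit square."""
    rho = img.astype(float) + delta * float(img.max())
    cell_area = 1.0 / rho.size          # uniform grid on [0,1]^2
    return rho / (rho.sum() * cell_area)
```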


On the bottom panels of Figure ?, we show the results reached after only 4 Newton iterations with the Fourier transform implementation. We note that the corresponding norm of the error between the transported density and the target is already quite small at this point. The 3d plot in Figure ? is characterized by very sharp spikes corresponding to variations in brightness between the two images where the scars are located. To get a better visual understanding of the situation, we also took the graph of the filtered contour plot, coloured in white the inside of the contour lines corresponding to the affected regions, and superposed this image on the MRI scan of the healthy brain (Figure ?). The number of GMRES iterations required per Newton iteration was very small and nearly constant (only 1 outer iteration and about 6 inner ones). Even if our code was not necessarily optimized in terms of speed, it only took about 30 seconds to compute these results on an Intel Xeon running at 2.33 GHz. Moreover, the spectral radius of the inverse discretization matrix was again uniformly small, which is very encouraging. We also need to mention that we do not have access here to an analytical expression for $f$ or $g$. Therefore, to compute an approximation of $g(x + \nabla u_n)$ and $\nabla g(x + \nabla u_n)$ at every grid point, we employ a closest neighbour approximation and, as previously pointed out, the results are still very good. In addition to this change detection, the optimal transport plan actually gives us precise information on the amount of variation from one MRI scan to the other. Indeed, we can define a metric between probability densities from the solution to the transport problem, the distance being the $L^2$ Wasserstein distance

$$d_2(f, g) = \Big(\int_{[0,1]^2} |x - T(x)|^2\, f(x)\, dx\Big)^{1/2}, \qquad T = \nabla\Psi.$$
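From the computed potential, this distance is directly accessible, since the displacement is $T(x) - x = \nabla u(x)$; a short sketch (with second-order gradients for simplicity):

```python
import numpy as np

def transport_distance(u, f, h):
    """Discrete L2 transportation distance from the computed potential u,
    using T(x) - x = grad u(x) on a periodic grid of spacing h = 1/N."""
    ux = (np.roll(u, -1, 0) - np.roll(u, 1, 0)) / (2 * h)
    uy = (np.roll(u, -1, 1) - np.roll(u, 1, 1)) / (2 * h)
    return float(np.sqrt(np.sum((ux**2 + uy**2) * f) * h * h))
```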