
# Notions of optimal transport theory and how to implement them on a computer

Bruno Lévy and Erica Schwindt
###### Abstract

This article gives an introduction to optimal transport, a mathematical theory that makes it possible to measure distances between functions (or distances between more general objects), to interpolate between objects or to enforce mass/volume conservation in certain computational physics simulations. Optimal transport is a rich scientific domain, with active research communities, both on its theoretical aspects and on more applicative considerations, such as geometry processing and machine learning. This article aims at explaining the main principles behind the theory of optimal transport, introduce the different involved notions, and more importantly, how they relate, to let the reader grasp an intuition of the elegant theory that structures them. Then we will consider a specific setting, called semi-discrete, where a continuous function is transported to a discrete sum of Dirac masses. Studying this specific setting naturally leads to an efficient computational algorithm, that uses classical notions of computational geometry, such as a generalization of Voronoi diagrams called Laguerre diagrams.

## 1 Introduction

This article presents an introduction to optimal transport. It summarizes and complements a series of conferences given by B. Lévy between 2014 and 2017. The presentation stays at an elementary level, that corresponds to a computer scientist’s vision of the problem. In the article, we stick to using standard notions of analysis (functions, integrals) and linear algebra (vectors, matrices), and give an intuition of the notion of measure. The main objective of the presentation is to understand the overall structure of the reasoning (“Teach principles, not equations.” [R. Feynman]), and to follow a continuous path from the theory to an efficient algorithm that can be implemented in a computer.

Optimal transport, initially studied by Monge [Mon84], is a very general mathematical framework that can be used to model a wide class of application domains. In particular, it is a natural formulation for several fundamental questions in computer graphics [Mém11, Mér11, BvdPPH11], because it makes it possible to define new ways of comparing functions, of measuring distances between functions, and of interpolating between two (or more) functions:

#### Comparing functions

Consider the functions $f$, $g$ and $h$ in Figure 1. Here we have chosen a function $f$ with a wildly oscillating graph, a function $g$ obtained by translating the graph of $f$ along the $x$ axis, and a function $h$ that corresponds to the mean value of $f$ (or $g$). If one measures the relative distances between these functions using the classical $L^2$ norm, that is $\|f-g\|_2 = (\int |f(x)-g(x)|^2 dx)^{1/2}$, one will find that $h$ is nearer to $f$ than $g$. Optimal transport makes it possible to define a distance that will take into account that the graph of $g$ can be obtained from $f$ through a translation (like here), or through a deformation of the graph of $f$. From the point of view of this new distance, the function $g$ will be nearer to $f$ than $h$.
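This difference between the two distances can be checked numerically. In 1D, the Wasserstein-1 distance is the integral of the absolute difference between the cumulative distribution functions, so for a translated bump it recovers the translation amplitude. A minimal sketch (assuming NumPy; the bump and the grid are illustrative choices, not those of Figure 1):

```python
import numpy as np

# Discretize two bumps on [0, 10]: g is f translated by 3.
x = np.linspace(0.0, 10.0, 1001)
dx = x[1] - x[0]
f = np.exp(-(x - 3.0) ** 2 / 0.1)
g = np.exp(-(x - 6.0) ** 2 / 0.1)
f /= f.sum() * dx   # normalize to unit mass
g /= g.sum() * dx

# L2 distance: large, because the supports barely overlap.
l2 = np.sqrt(np.sum((f - g) ** 2) * dx)

# 1D Wasserstein-1 distance: integral of |CDF_f - CDF_g|,
# which recovers the translation amplitude (about 3 here).
w1 = np.sum(np.abs(np.cumsum(f) - np.cumsum(g)) * dx) * dx
print(l2, w1)  # w1 is close to 3, the translation distance
```

The $L^2$ distance stays large however far the bumps are, while the transport distance grows linearly with the translation, which is the behavior described above.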

#### Interpolating between functions:

Now consider the functions $f_0$ and $f_1$ in Figure 2. Here we suppose that $f_0$ corresponds to a certain physical quantity measured at an initial time $t=0$ and that $f_1$ corresponds to the same phenomenon measured at a final time $t=1$. The problem that we consider now consists in reconstructing what happened between $t=0$ and $t=1$. If we use linear interpolation (Figure 2, top-right), we will see the left hump progressively disappearing while the right hump progressively appears, which is not very realistic if the functions represent for instance a propagating wave. Optimal transport makes it possible to define another type of interpolation (McCann’s displacement interpolation, Figure 2, bottom-right), that will progressively displace and morph the graph of $f_0$ into the graph of $f_1$.
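For the quadratic cost in 1D, displacement interpolation has a simple closed form: one interpolates the quantile functions (inverse CDFs) of the two densities. A minimal sketch (assuming NumPy; the two humps are illustrative, not those of Figure 2):

```python
import numpy as np

def quantile(x, dens, q):
    """Inverse CDF of a discretized density, by linear interpolation."""
    cdf = np.cumsum(dens)
    cdf /= cdf[-1]
    return np.interp(q, cdf, x)

x = np.linspace(0.0, 10.0, 2001)
f0 = np.exp(-(x - 2.0) ** 2)          # hump at t = 0
f1 = np.exp(-(x - 8.0) ** 2)          # hump at t = 1

# McCann interpolation in 1D: interpolate the quantile functions.
q = np.linspace(0.001, 0.999, 999)
t = 0.5
xt = (1.0 - t) * quantile(x, f0, q) + t * quantile(x, f1, q)
print(xt.mean())  # the half-way hump is centered near 5
```

At $t = 1/2$ the interpolated measure is a single hump half-way between the two, instead of the superposition of two half-height humps produced by linear interpolation.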

Optimal transport makes it possible to define a geometry of a space of functions (or more general objects, called probability measures, more on this later), and thus gives a definition of distance in this space, as well as means of interpolating between different functions, and in general, defining the barycenter of a weighted family of functions, in a very general context. Thus, optimal transport appears as a fundamental tool in many applied domains. In computer graphics, applications were proposed to compare and interpolate objects of diverse natures [BvdPPH11], and to generate lenses that can concentrate light to form caustics in a prescribed manner [MMT17, STTP14]. Moreover, optimal transport defines new tools that can be used to discretize Partial Differential Equations, and define new numerical solution mechanisms [BCMO14]. This type of numerical solution mechanism can be used to simulate for instance fluids [GM17], with spectacular applications and results in computer graphics [dGWH15].

The two sections that follow are partly inspired by [Vil09], [San14], [Caf03] and [AG13], but stay at an elementary level. Here the main goal is to give an intuition of the different concepts, and more importantly an idea of the way they relate to each other. Finally we will see how they can be directly used to design a computational algorithm with very good performance, that can be used in practice in several application domains.

## 2 Monge’s problem

The initial optimal transport problem was first introduced and studied by Monge, right before the French revolution [Mon84]. We first give an intuitive idea of the problem, then quickly introduce the notion of measure, that is necessary to formally state the problem in its most general form and to analyze it.

### 2.1 Intuition

Monge’s initial motivation to study this problem was very practical: supposing you have an army of workmen, how can you transform a terrain with an initial landscape into a given desired target landscape, while minimizing the total amount of work?

Monge’s initial problem statement was as follows:

$$\inf_{T:X\to X} \int_X c(x,T(x))\,u(x)\,dx \quad\text{subject to: } \forall B\subset X,\ \int_{T^{-1}(B)} u(x)\,dx = \int_B v(x)\,dx$$

where $X$ is a subset of $\mathbb{R}^d$, $u$ and $v$ are two positive functions defined on $X$ and such that $\int_X u(x)\,dx = \int_X v(x)\,dx$, and $c$ is a convex distance (the Euclidean distance in Monge’s initial problem statement).

The functions $u$ and $v$ represent the height of the current landscape and the height of the target landscape respectively (symbolized as gray levels in Figure 3). The problem consists in finding (if it exists) a function $T$ from $X$ to $X$ that transforms the current landscape $u$ into the desired one $v$, while minimizing the product of the amount of transported earth with the distance to which it was transported. Clearly, the amount of earth is conserved during transport, thus the total quantity of earth should be the same in the source and target landscapes (the integrals of $u$ and $v$ over $X$ should coincide). This global matter conservation constraint needs to be completed with a local one. The local matter conservation constraint enforces that in the target landscape, the quantity of earth received in any subset $B$ of $X$ corresponds to what was transported here, that is the quantity of earth initially present in the pre-image $T^{-1}(B)$ of $B$ under $T$. Without this constraint, one could locally create matter in some places and annihilate matter in other places in a counterbalancing way. A map $T$ that satisfies the local mass conservation constraint is called a transport map.

### 2.2 Monge’s problem with measures

We now suppose that instead of a “target landscape”, we wish to transport earth (or a resource) towards a set of points (that will be denoted by $Y$ from now on), that represent for instance a set of factories that exploit a resource, see Figure 4. Each factory wishes to receive a certain quantity of resource (depending for instance on the number of potential customers around the factory). Thus, the function $v$ that represents the “target landscape” is replaced with a function defined on the finite set of points $Y$. However, if a function is zero everywhere except on a finite set of points, then its integral over $X$ is also zero. This is a problem, because for instance one cannot properly express the mass conservation constraint. For this reason, the notion of function is not rich enough for representing this configuration. One can use instead measures (more on this below), and associate with each factory a Dirac mass weighted by the quantity of resource to be transported to the factory.

From now on, we will use measures $\mu$ and $\nu$ to represent the “current landscape” and the “target landscape”. These measures are supported by sets $X$ and $Y$, that may be different sets (in the present example, $X$ is a subset of $\mathbb{R}^2$ and $Y$ is a discrete set of points). Using measures instead of functions not only makes it possible to study our “transport to a discrete set of factories” problem, but it can also be used to formalize computer objects (meshes) and directly leads to a computational algorithm. This algorithm is very elegant because it is a verbatim computer translation of the mathematical theory (see §7.6). In this particular setting, translating from the mathematical language to the algorithmic setting does not require making any approximation. This is made possible by the generality of the notion of measure.

The reader who wishes to learn more on measure theory may refer to the textbook [Tao11]. To keep the length of this article reasonable, we will not give here the formal definition of a measure. In our context, one can think of a measure as a “function” that can only be queried using integrals and that can be “concentrated” on very small sets (points). The following table can be used to intuitively translate from the “language of functions” to the “language of measures”:

| Function $u$ | Measure $\mu$ |
|---|---|
| $\int_B u(x)\,dx$ | $\mu(B)$ or $\int_B d\mu$ |
| $\int_B f(x)u(x)\,dx$ | $\int_B f(x)\,d\mu$ |
| $u(x)$ | N/A |

(Note: in contrast with functions, measures cannot be evaluated at a point, they can only be integrated over domains).

In its version with measures, Monge’s problem can be stated as follows:

$$\inf_{T:X\to Y} \int_X c(x,T(x))\,d\mu \quad\text{subject to}\quad \nu = T_\sharp\mu \qquad (M)$$

where $X$ and $Y$ are Borel sets (that is, sets that can be measured), $\mu$ and $\nu$ are two measures on $X$ and $Y$ respectively such that $\mu(X) = \nu(Y)$, and $c$ is a convex distance. The constraint $\nu = T_\sharp\mu$, that reads “$T$ pushes $\mu$ onto $\nu$”, corresponds to the local mass conservation constraint. Given a measure $\mu$ on $X$ and a map $T$ from $X$ to $Y$, the measure $T_\sharp\mu$ on $Y$, called “the pushforward of $\mu$ by $T$”, is such that $T_\sharp\mu(B) = \mu(T^{-1}(B))$ for all Borel set $B \subset Y$. Thus, the local mass conservation constraint means that $\mu(T^{-1}(B)) = \nu(B)$ for all Borel set $B$.
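For a discrete measure the pushforward is easy to compute explicitly, which gives a feeling for the definition: each Dirac mass is moved to its image, and masses merged by the map add up. A minimal sketch (assuming NumPy; the map `T` is a hypothetical fold of the line, chosen only to make it non-injective):

```python
import numpy as np

# A toy discrete measure mu: points x_i with masses m_i summing to 1.
xs = np.array([0.0, 1.0, 2.0, 3.0])
ms = np.array([0.25, 0.25, 0.25, 0.25])

def T(x):
    # hypothetical non-injective map: fold the line around x = 1.5
    return np.minimum(x, 3.0 - x)

# Pushforward T#mu: move each Dirac mass to its image, summing masses
# of points that T merges together.
ys, inv = np.unique(T(xs), return_inverse=True)
push = np.zeros_like(ys)
np.add.at(push, inv, ms)

# Local mass conservation: T#mu(B) = mu(T^{-1}(B)), here with B = (-inf, 0.5].
assert np.isclose(push[ys <= 0.5].sum(), ms[T(xs) <= 0.5].sum())
```

The total mass is conserved by construction, and the check on a sample set $B$ illustrates the defining identity $T_\sharp\mu(B) = \mu(T^{-1}(B))$.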

The local mass conservation constraint makes the problem very difficult: imagine now that you want to implement a computer program that enforces it: the constraint concerns all the subsets $B$ of $Y$. Could you imagine an algorithm that just tests whether a given map $T$ satisfies it? What about enforcing it? We will see below a series of transformations of the initial problem into equivalent problems, where the constraint becomes linear. We will finally end up with a simple convex optimization problem, that can be solved numerically using classical methods.

Before then, let us get back to examine the original problem. The local mass conservation constraint is not the only difficulty: the functional optimized by Monge’s problem is non-symmetric, and this causes additional difficulties when studying the existence of solutions for problem (M). The problem is not symmetric because $T$ needs to be a map, therefore the source and target landscapes do not play the same role. Thus, it is possible to merge earth (if $T(x_1) = T(x_2)$ for two different points $x_1 \neq x_2$), but it is not possible to split earth (for that, we would need a “map” that could send the same point $x$ to two different points $y_1$ and $y_2$). The problem is illustrated in Figure 5: suppose that you want to compute the optimal transport between a segment $S$ (that symbolizes a “wall of earth”) and two parallel segments $S_1$ and $S_2$ (that symbolize two “trenches” with a depth that corresponds to half the height of the wall of earth). Now we want to transport the wall of earth to the trenches, to make the landscape flat. To do so, it is possible to decompose $S$ into small segments, sent alternatively towards $S_1$ and $S_2$ (Figure 5 on the left). For any such length, it is always possible to find a better map $T$, that is, one with a lower value of the functional in (M), by subdividing $S$ into smaller segments (Figure 5 on the right). The best way to proceed consists in sending from each point of $S$ half the earth to $S_1$ and half the earth to $S_2$, which cannot be represented by a map. Thus, the best solution of problem (M) is not a map. In a more general setting, this problem appears each time the source measure has mass concentrated on a manifold of lower dimension [McC95] (like the segment $S$ in the present example).

## 3 Kantorovich’s relaxed problem

To overcome this difficulty, Kantorovich stated a problem with a larger space of solutions, that is, a relaxation of problem (M), where mass can be both split and merged. The idea consists in solving for the “graph of $T$” instead of $T$. One may think of the graph of $T$ as a function defined on $X \times Y$ that indicates for each couple of points $(x,y)$ how much matter goes from $x$ to $y$. However, once again, we cannot use standard functions to represent the graph of $T$: if you think about the graph of a univariate function $f$, it is defined on $\mathbb{R}^2$ but concentrated on a curve. For this reason, as in our previous example with factories, one needs to use measures. Thus, we are now looking for a measure $\gamma$ supported by the product space $X \times Y$. The relaxed problem is stated as follows:

$$\inf_\gamma \left\{ \int_{X\times Y} c(x,y)\,d\gamma \ \middle|\ \gamma \ge 0 \text{ and } \gamma \in \Pi(\mu,\nu) \right\} \qquad (K)$$
$$\text{where: } \Pi(\mu,\nu) = \big\{ \gamma \in \mathcal{P}(X\times Y) \ \big|\ (P_X)_\sharp\gamma = \mu\ ;\ (P_Y)_\sharp\gamma = \nu \big\}$$

where $P_X$ and $P_Y$ denote the two projections $(x,y) \mapsto x$ and $(x,y) \mapsto y$ respectively.

The two measures $(P_X)_\sharp\gamma$ and $(P_Y)_\sharp\gamma$ obtained by pushing $\gamma$ forward by the two projections are called the marginals of $\gamma$. The measures in the admissible set $\Pi(\mu,\nu)$, that is, the measures that have $\mu$ and $\nu$ as marginals, are called transport plans. Let us now have a closer look at the two constraints on the marginals of $\gamma$ that define the set of transport plans $\Pi(\mu,\nu)$. Recalling the definition of the pushforward (previous subsection), these two constraints can also be written as:

$$(P_X)_\sharp\gamma = \mu \iff \forall B \subset X,\ \int_B d\mu = \int_{B\times Y} d\gamma$$
$$(P_Y)_\sharp\gamma = \nu \iff \forall B' \subset Y,\ \int_{B'} d\nu = \int_{X\times B'} d\gamma. \qquad (1)$$

Intuitively, the first constraint means that everything that comes from a subset $B$ of $X$ should correspond to the amount of matter (initially) contained by $B$ in the source landscape $\mu$, and the second one means that everything that goes into a subset $B'$ of $Y$ should correspond to the (prescribed) amount of matter contained by $B'$ in the target landscape $\nu$.

We now examine the relation between the relaxed problem (K) and the initial problem (M). One can easily check that among the transport plans, those of the form $\gamma = (Id \times T)_\sharp\mu$ correspond to a transport map $T$:

###### Observation 1.

If $\gamma = (Id \times T)_\sharp\mu$ is a transport plan, then $T$ pushes $\mu$ onto $\nu$.

###### Proof.

$\gamma$ is in $\Pi(\mu,\nu)$, thus $\nu = (P_Y)_\sharp\gamma = (P_Y)_\sharp(Id\times T)_\sharp\mu = (P_Y \circ (Id\times T))_\sharp\mu$, and finally $\nu = T_\sharp\mu$. ∎

We can now observe that if a transport plan has the form , then problem (K) becomes:

$$\min \left\{ \int_{X\times Y} c(x,y)\, d\big((Id\times T)_\sharp\mu\big) \right\} = \min \left\{ \int_X c(x, T(x))\, d\mu \right\}$$

(one retrieves the initial Monge problem).

To further help grasping an intuition of this notion of transport plan, we show four 1D examples in Figure 6 (the transport plan $\gamma$ is then a measure on $\mathbb{R}\times\mathbb{R}$). Intuitively, the transport plan may be thought of as a “table” indexed by $x$ and $y$ that indicates the quantity of matter transported from $x$ to $y$. More exactly (we recall that one cannot evaluate a measure at a point $(x,y)$, we can just compute integrals with $\gamma$), the measure $\gamma$ is non-zero on subsets of $X\times Y$ that contain points $(x,y)$ such that some matter is transported from $x$ to $y$. Whenever $\gamma$ derives from a transport map $T$, that is, if $\gamma$ has the form $(Id\times T)_\sharp\mu$, then we can consider $\gamma$ as the “graph of $T$”, like in the first two examples (A) and (B) of Figure 6. (Note that the measure $\mu$ is supposed to be absolutely continuous with respect to the Lebesgue measure. This is required, because for instance in example (B) of Figure 6, the transport map is undefined at the center of the segment. The absolute continuity requirement allows one to remove from $X$ any subset with zero measure.)

The transport plans of the two other examples (C) and (D) have no associated transport map, because they split Dirac masses. The transport plan associated with Figure 5 has the same nature (but this time in 2D). It cannot be written in the form $(Id\times T)_\sharp\mu$ because it splits the mass concentrated in $S$ into $S_1$ and $S_2$.

Now the theoretical questions are:

• when does an optimal transport plan exist?

• when does it admit an associated optimal transport map?

A standard approach to tackle this type of existence problem is to find a certain regularity both in the functional and in the space of the admissible transport plans, that is, proving that the functional is sufficiently “smooth” and finding a compact set of admissible transport plans. Since the set of admissible transport plans contains at least the product measure $\mu \otimes \nu$, it is non-empty, and existence can be proven thanks to a topological argument that exploits the regularity of the functional and the compactness of the set. Once the existence of a transport plan is established, the other question concerns the existence of an associated transport map. Unfortunately, problem (K) does not directly show the structure required by this reasoning path. However, one can observe that (K) is a linear optimization problem subject to linear constraints. This suggests using certain tools, such as the dual formulation, that was also developed by Kantorovich. With this dual formulation, it is possible to exhibit an interesting structure of the problem, that can be used to answer both questions (existence of a transport plan, and existence of an associated transport map).

## 4 Kantorovich’s dual problem

Kantorovich’s duality applies to problem (K) in its most general form (with measures). To facilitate understanding, we will consider instead a discrete version of problem (K), where the involved entities are vectors and matrices (instead of measures and operators). This makes it easier to discern the structure of (K), that is, the linear nature of both the functional and the constraints. It also makes it easier for the reader to understand how to construct the dual by manipulating simpler objects (matrices and vectors); the structure of the reasoning is the same in the general case.

### 4.1 The discrete Kantorovich problem

In Figure 7, we show a discrete version of the 1D transport between two segments of Figure 6. The measures $\mu$ and $\nu$ are replaced with vectors $U = (u_i)_{i=1}^m$ and $V = (v_j)_{j=1}^n$. The transport plan becomes a set of coefficients $\gamma_{ij}$. Each coefficient $\gamma_{ij}$ indicates the quantity of matter that will be transported from $x_i$ to $y_j$. The discrete Kantorovich problem can be written as follows:

$$\min_{\gamma \ge 0}\ \langle C, \gamma\rangle \quad\text{subject to}\quad P_X\gamma = U,\ \ P_Y\gamma = V \qquad (2)$$

where $\gamma$ is the vector of $\mathbb{R}^{mn}$ with all coefficients $\gamma_{ij}$ (that is, the matrix $(\gamma_{ij})$ “unrolled” into a vector), and $C$ the vector of $\mathbb{R}^{mn}$ with the coefficients $c_{ij}$ indicating the transport cost between point $x_i$ and point $y_j$ (for instance, the Euclidean cost). The objective function is simply the dot product, denoted by $\langle\cdot,\cdot\rangle$, of the cost vector $C$ and the vector $\gamma$. The objective function is linear in $\gamma$. The constraints on the marginals (1) impose in this discrete version that the sums of the coefficients over the columns correspond to the coefficients $u_i$ (Figure 7-B) and the sums over the rows correspond to the coefficients $v_j$ (Figure 7-C). Intuitively, everything that leaves point $x_i$ should correspond to $u_i$, that is the quantity of matter initially present in $x_i$ in the source landscape, and everything that arrives at a point $y_j$ should correspond to $v_j$, that is the quantity of matter desired at $y_j$ in the target landscape. As one can easily notice, in this form, both constraints are linear in $\gamma$. They can be written with two matrices $P_X$ and $P_Y$, of dimensions $m\times mn$ and $n\times mn$ respectively.
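As an illustration, the discrete problem can be fed directly to a generic linear programming solver. The sketch below (assuming NumPy and SciPy’s `linprog`; the 1D instance is made up for the example) builds the marginal constraint matrices with Kronecker products and solves for the optimal plan:

```python
import numpy as np
from scipy.optimize import linprog

# Toy instance: m = n = 3 points on the line, quadratic cost.
xs = np.array([0.0, 1.0, 2.0])
ys = xs + 0.5
U = np.array([0.5, 0.25, 0.25])   # source masses u_i
V = np.array([0.25, 0.25, 0.5])   # target masses v_j
C = (xs[:, None] - ys[None, :]) ** 2   # cost matrix c_ij, unrolled below

m, n = len(U), len(V)
# Marginal constraint matrices: P_X gamma = U and P_Y gamma = V.
PX = np.kron(np.eye(m), np.ones((1, n)))   # sum_j gamma_ij = u_i
PY = np.kron(np.ones((1, m)), np.eye(n))   # sum_i gamma_ij = v_j
A_eq = np.vstack([PX, PY])
b_eq = np.concatenate([U, V])

res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
gamma = res.x.reshape(m, n)
print(res.fun)   # optimal transport cost
```

Generic LP solvers work for small instances like this one; they do not scale to large discrete problems, which is one motivation for the specialized methods discussed later.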

### 4.2 Constructing the Kantorovich dual in the discrete setting

We introduce, arbitrarily for now, the following function $L$ defined by:

$$L(\gamma,\varphi,\psi) = \langle C,\gamma\rangle - \langle \varphi,\ P_X\gamma - U\rangle - \langle \psi,\ P_Y\gamma - V\rangle$$

that takes as additional arguments two vectors, $\varphi$ in $\mathbb{R}^m$ and $\psi$ in $\mathbb{R}^n$. The function $L$ is constructed from the objective function $\langle C,\gamma\rangle$, from which we subtracted the dot products of $\varphi$ and $\psi$ with the vectors $P_X\gamma - U$ and $P_Y\gamma - V$ that correspond to the degree of violation of the constraints. One can observe that:

$$\sup_{\varphi,\psi} \big[L(\gamma,\varphi,\psi)\big] = \begin{cases}\langle C,\gamma\rangle & \text{if } P_X\gamma = U \text{ and } P_Y\gamma = V\\ +\infty & \text{otherwise.}\end{cases}$$

Indeed, if for instance a component of $P_X\gamma - U$ is non-zero, one can make $L$ arbitrarily large by suitably choosing the associated coefficient of $\varphi$.

Now we consider:

$$\inf_{\gamma\ge 0}\left[\sup_{\varphi,\psi} L(\gamma,\varphi,\psi)\right] = \inf_{\substack{\gamma\ge 0\\ P_X\gamma = U\\ P_Y\gamma = V}} \langle C,\gamma\rangle.$$

There is equality, because to keep the supremum finite, $\gamma$ has no other choice than satisfying the constraints (see the previous observation). Thus, we obtain a new expression (left-hand side) of the discrete Kantorovich problem (right-hand side). We now further examine it, and replace $L$ by its expression:

$$\begin{aligned}
&\inf_{\gamma\ge 0}\left[\sup_{\varphi,\psi}\Big(\langle C,\gamma\rangle - \langle\varphi, P_X\gamma - U\rangle - \langle\psi, P_Y\gamma - V\rangle\Big)\right] &(3)\\
=\ &\sup_{\varphi,\psi}\left[\inf_{\gamma\ge 0}\Big(\langle C,\gamma\rangle - \langle\varphi, P_X\gamma - U\rangle - \langle\psi, P_Y\gamma - V\rangle\Big)\right] &(4)\\
=\ &\sup_{\varphi,\psi}\left[\inf_{\gamma\ge 0}\Big(\langle\gamma,\ C - P_X^t\varphi - P_Y^t\psi\rangle + \langle\varphi,U\rangle + \langle\psi,V\rangle\Big)\right] &(5)\\
=\ &\sup_{\substack{\varphi,\psi\\ P_X^t\varphi + P_Y^t\psi \le C}}\Big[\langle\varphi,U\rangle + \langle\psi,V\rangle\Big]. &(6)
\end{aligned}$$

The first step (4) consists in exchanging the “inf” and the “sup”. Then we rearrange the terms (5). In (5), the infimum over $\gamma \ge 0$ is $-\infty$ unless $C - P_X^t\varphi - P_Y^t\psi \ge 0$, in which case it is attained at $\gamma = 0$; by reinterpreting this as a constrained optimization problem (similarly to what we did in the previous paragraph), we finally obtain (6). In the constraint $P_X^t\varphi + P_Y^t\psi \le C$, the inequality is to be considered componentwise. Finally, problem (6) can be rewritten as:

$$\sup_{\varphi,\psi}\Big[\langle\varphi,U\rangle + \langle\psi,V\rangle\Big] \quad\text{subject to}\quad \varphi_i + \psi_j \le c_{ij},\ \forall i,j. \qquad (7)$$

As compared to the primal problem (2) that depends on $m\times n$ variables (all the coefficients $\gamma_{ij}$ of the transport plan, for all couples of points $(x_i,y_j)$), this dual problem depends on $m+n$ variables (the components $\varphi_i$ and $\psi_j$ attached to the source points and target points). We will see later how to further reduce the number of variables, but before then, we go back to the general continuous setting (that is, with functions, measures and operators).
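Since (7) is itself a linear program, it can also be handed to a generic solver. The sketch below (assuming NumPy and SciPy’s `linprog`, on a made-up 1D instance) maximizes $\langle\varphi,U\rangle + \langle\psi,V\rangle$ under the $m\cdot n$ componentwise constraints; by linear programming duality its optimal value coincides with the primal transport cost:

```python
import numpy as np
from scipy.optimize import linprog

# Toy instance: maximize <phi,U> + <psi,V> s.t. phi_i + psi_j <= c_ij.
xs = np.array([0.0, 1.0, 2.0]); ys = xs + 0.5
U = np.array([0.5, 0.25, 0.25]); V = np.array([0.25, 0.25, 0.5])
C = (xs[:, None] - ys[None, :]) ** 2
m, n = len(U), len(V)

# One inequality row per pair (i, j): phi_i + psi_j <= c_ij.
A_ub = np.zeros((m * n, m + n))
for i in range(m):
    for j in range(n):
        A_ub[i * n + j, i] = 1.0        # coefficient of phi_i
        A_ub[i * n + j, m + j] = 1.0    # coefficient of psi_j

# linprog minimizes, so negate the objective; phi, psi are free variables.
res = linprog(-np.concatenate([U, V]), A_ub=A_ub, b_ub=C.ravel(),
              bounds=(None, None))
print(-res.fun)   # equals the primal optimal transport cost (strong duality)
```

Note that the dual is invariant under adding a constant to $\varphi$ and subtracting it from $\psi$, so the optimal potentials are only determined up to such a shift.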

### 4.3 The Kantorovich dual in the continuous setting

The same reasoning path can be applied to the continuous Kantorovich problem (K), leading to the following problem (DK):

$$\sup_{\varphi,\psi}\left[\int_X \varphi\, d\mu + \int_Y \psi\, d\nu\right] \quad\text{subject to: } \varphi(x) + \psi(y) \le c(x,y)\ \ \forall (x,y)\in X\times Y, \qquad (DK)$$

where $\varphi$ and $\psi$ are now functions defined on $X$ and $Y$ (they need to be taken in $L^1(\mu)$ and $L^1(\nu)$; the proof of the equivalence with problem (K) requires more precautions than in the discrete case, in particular step (4), exchanging sup and inf, which uses a result of convex analysis due to Rockafellar, see [Vil09] chapter 5).

The classical image that gives an intuitive meaning to this dual problem is to consider that instead of transporting earth by ourselves, we are now hiring a company that will do the work on our behalf. The company has a special way of determining the price: the function $\varphi$ corresponds to what they charge for loading earth at $x$, and $\psi$ corresponds to what they charge for unloading earth at $y$. The company aims at maximizing its profit (this is why the dual problem is a “sup” rather than an “inf”), but it cannot charge more than what it would cost us if we were doing the work by ourselves (hence the constraint).

The existence of solutions for (DK) remains difficult to study, because the set of functions $(\varphi,\psi)$ that satisfy the constraint is not compact. However, it is possible to reveal more structure of the problem, by introducing the notion of $c$-transform, that makes it possible to exhibit a set of admissible functions with sufficient regularity:

###### Definition 1.
• For any function $f$ on $Y$ with values in $\mathbb{R}\cup\{-\infty\}$ and not identically $-\infty$, we define its $c$-transform $f^c$ by

$$f^c(x) = \inf_{y\in Y}\big[c(x,y) - f(y)\big], \quad x\in X.$$
• If a function $\varphi$ is such that there exists a function $f$ such that $\varphi = f^c$, then $\varphi$ is said to be $c$-concave;

• $\Psi_c(X)$ denotes the set of $c$-concave functions on $X$.
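In the discrete setting, where $f$ is a vector of values at the target points and the cost is a matrix $c_{ij}$, the $c$-transform is a single vectorized minimum over $j$. A minimal sketch (assuming NumPy; the points and values are random illustrative data):

```python
import numpy as np

def c_transform(c, f):
    """Discrete c-transform: f^c(x_i) = min_j [ c_ij - f_j ]."""
    return (c - f[None, :]).min(axis=1)

# Toy cost between 4 source and 3 target points, arbitrary f.
rng = np.random.default_rng(0)
xs = rng.random(4); ys = rng.random(3)
c = (xs[:, None] - ys[None, :]) ** 2
f = rng.random(3)

fc = c_transform(c, f)
# Admissibility: f^c(x_i) + f(y_j) <= c_ij for all i, j (cf. Observation 2).
assert np.all(fc[:, None] + f[None, :] <= c + 1e-12)
```

Since $f^c(x_i)$ is a minimum over upper bounds $c_{ij}-f_j$, the pair $(f^c, f)$ automatically satisfies the dual constraint, which is exactly the point of the definition.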

We now show two properties of (DK) that will allow us to restrict the search for $\varphi$ and $\psi$ to the class of $c$-concave functions:

###### Observation 2.

If the pair $(\varphi,\psi)$ is admissible for (DK), then the pair $(\psi^c,\psi)$ is admissible as well.

###### Proof.
$$\begin{aligned}
&\forall (x,y)\in X\times Y,\quad \varphi(x)+\psi(y) \le c(x,y)\\
&\psi^c(x) = \inf_{y\in Y}\big[c(x,y)-\psi(y)\big]\\
&\psi^c(x) + \psi(y) = \inf_{y'\in Y}\big[c(x,y')-\psi(y')\big] + \psi(y) \le \big(c(x,y)-\psi(y)\big) + \psi(y) \le c(x,y).
\end{aligned}$$

###### Observation 3.

If the pair $(\varphi,\psi)$ is admissible for (DK), then one obtains a better pair by replacing $\varphi$ with $\psi^c$:

###### Proof.
$$\left.\begin{aligned}
&\psi^c(x) = \inf_{y\in Y}\big[c(x,y)-\psi(y)\big]\\
&\forall y\in Y,\ \varphi(x) \le c(x,y)-\psi(y)
\end{aligned}\right\}\ \Rightarrow\ \varphi(x) \le \psi^c(x).$$

In terms of the previous intuitive image, this means that by replacing $\varphi$ with $\psi^c$, the company can charge more while the price remains acceptable for the client (that is, the constraint is satisfied). Thus, we have:

$$\inf\,(K) = \sup_{\varphi\in\Psi_c(X)} \int_X \varphi\, d\mu + \int_Y \varphi^c\, d\nu = \sup_{\psi\in\Psi_c(Y)} \int_X \psi^c\, d\mu + \int_Y \psi\, d\nu$$

We do not give here the detailed proof of existence. The reader is referred to [Vil09], Chapter 4. The idea is that we are now in a much better situation, since the set of admissible functions is compact (provided that the value of $\psi$ is fixed at a point of $Y$, in order to suppress the invariance with respect to adding a constant to $\psi$).

The optimal value gives an interesting piece of information, that is, the minimum cost of transforming $\mu$ into $\nu$. This can be used to define a distance between distributions, and also gives a way to compare different distributions, which is of practical interest for some applications.

## 5 From the Kantorovich dual to the optimal transport map

### 5.1 The c-superdifferential

Suppose now that in addition to the optimal cost you want to know the associated way of transforming $\mu$ into $\nu$, in other words, when it exists, the map $T$ from $X$ to $Y$ whose associated transport plan minimizes the functional of the Monge problem. A result characterizes the support of the optimal transport plan, that is, the subset of $X\times Y$ of the pairs of points connected by the transport plan:

###### Theorem 1.

Let $\varphi$ be a $c$-concave function. For all $x\in X$ and for all $y \in \partial^c\varphi(x)$, we have

$$\nabla\varphi(x) - \nabla_x c(x,y) = 0,$$

where $\partial^c\varphi$ denotes the so-called $c$-superdifferential of $\varphi$.

###### Proof.

See [Vil09] chapters 9 and 10. ∎

In order to give an idea of the relation between the $c$-superdifferential and the associated transport map $T$, we present below a heuristic argument: consider a point $(x,y)$ in the $c$-superdifferential $\partial^c\varphi$; then for all $z\in X$ we have

$$c(x,y) - \varphi(x) \le c(z,y) - \varphi(z). \qquad (9)$$

Now, by using (9), we can compute the derivative at $x$ with respect to an arbitrary direction $w$:

$$\lim_{t\to 0^+} \frac{\varphi(x+tw) - \varphi(x)}{t} \le \lim_{t\to 0^+} \frac{c(x+tw,y) - c(x,y)}{t}$$

and we obtain $\nabla\varphi(x)\cdot w \le \nabla_x c(x,y)\cdot w$. We can do the same derivation along direction $-w$ instead of $w$, and then we get $\nabla\varphi(x)\cdot w = \nabla_x c(x,y)\cdot w$, $\forall w$.

In the particular case of the $L^2$ cost, that is $c(x,y) = \|x-y\|^2/2$, this relation becomes $\nabla\varphi(x) = x - y$; thus, when the optimal transport map exists, it is given by

$$T(x) = x - \nabla\varphi(x) = \nabla\big(\|x\|^2/2 - \varphi(x)\big).$$

Not only does this give an expression of $T$ as a function of $\varphi$, which is of high interest to us if we want to compute the transport explicitly; in addition, this makes it possible to characterize $T$ as the gradient of a convex function (see also Brenier’s polar factorization theorem [Bre91]). This convexity property is interesting, because it means that two “transported particles” $x_1 \mapsto T(x_1)$ and $x_2 \mapsto T(x_2)$ will never collide. We now see how to prove these two assertions ($T$ is the gradient of a convex function, and absence of collision) in the case of the $L^2$ transport (with $c(x,y) = \|x-y\|^2/2$).

###### Observation 4.

If $c(x,y) = \|x-y\|^2/2$ and $\varphi = \psi^c$, then $\bar\varphi : x \mapsto \|x\|^2/2 - \varphi(x)$ is a convex function (this is in fact an equivalence, see [San15]).

###### Proof.
$$\varphi(x) = \psi^c(x) = \inf_y\left[\frac{\|x-y\|^2}{2} - \psi(y)\right] = \inf_y\left[\frac{\|x\|^2}{2} - x\cdot y + \frac{\|y\|^2}{2} - \psi(y)\right].$$

Then,

$$-\bar\varphi(x) = \varphi(x) - \frac{\|x\|^2}{2} = \inf_y\left[-x\cdot y + \left(\frac{\|y\|^2}{2} - \psi(y)\right)\right].$$

Or equivalently,

$$\bar\varphi(x) = \sup_y\left[x\cdot y - \left(\frac{\|y\|^2}{2} - \psi(y)\right)\right].$$

The function $x \mapsto x\cdot y - (\|y\|^2/2 - \psi(y))$ is affine in $x$, therefore the graph of $\bar\varphi$ is the upper envelope of a family of hyperplanes, therefore $\bar\varphi$ is a convex function (see Figure 8). ∎

###### Observation 5.

We now consider the trajectories of two particles parameterized by $t\in[0,1]$: $t \mapsto (1-t)x_1 + tT(x_1)$ and $t \mapsto (1-t)x_2 + tT(x_2)$. If $x_1 \neq x_2$ and $T = \nabla\bar\varphi$ with $\bar\varphi$ convex, then there is no collision between the two particles for $t < 1$.

###### Proof.

By contradiction, suppose there is a collision, that is, there exist $t\in(0,1)$ and $x_1 \neq x_2$ such that

$$(1-t)x_1 + tT(x_1) = (1-t)x_2 + tT(x_2).$$

Since $T = \nabla\bar\varphi$, we can rewrite the last equality as

$$(1-t)(x_1 - x_2) + t\big(\nabla\bar\varphi(x_1) - \nabla\bar\varphi(x_2)\big) = 0.$$

Therefore,

$$(1-t)\|x_1 - x_2\|^2 + t\big(\nabla\bar\varphi(x_1) - \nabla\bar\varphi(x_2)\big)\cdot(x_1 - x_2) = 0.$$

The last step leads to a contradiction, because the left-hand side is the sum of a strictly positive number and a non-negative one (recalling that the convexity of $\bar\varphi$ implies $(\nabla\bar\varphi(x_1) - \nabla\bar\varphi(x_2))\cdot(x_1 - x_2) \ge 0$). ∎ (Note that even if there is no collision, the trajectories can cross, that is, $(1-t)x_1 + tT(x_1) = (1-s)x_2 + sT(x_2)$ for some $t\neq s$ (see example in [Vil09]). If the cost is the Euclidean distance (instead of the squared Euclidean distance), the non-intersection property is stronger and trajectories cannot cross. This comes at the expense of losing the uniqueness of the optimal transport plan [Vil09].)

## 6 Continuous, discrete and semi-discrete transport

The properties that we have presented in the previous sections are true for any couple of source and target measures $\mu$ and $\nu$, that can derive from continuous functions or that can be discrete empirical measures (sums of Dirac masses). Figure 9 presents three configurations that are interesting to study. These configurations have specific properties, that lead to different algorithms for computing the transport. We give here some indications and references concerning the continuous → continuous and discrete → discrete cases. Then we will develop the continuous → discrete case with more details in the next section.

### 6.1 The continuous → continuous case and Monge-Ampère equation

We recall that when the optimal transport map $T$ exists, in the case of the $L^2$ cost (that is, $c(x,y) = \|x-y\|^2/2$), it can be deduced from the function $\varphi$ by using the relation $T(x) = \nabla\bar\varphi(x) = \nabla(\|x\|^2/2 - \varphi(x))$. The change of variable formula for integration over a subset $B$ of $X$ can be written as:

$$\forall B\subset X,\quad \int_B 1\,d\mu = \mu(B) = \nu(T(B)) = \int_B |\det J_T(x)|\, v(T(x))\, dx \qquad (10)$$

where $J_T$ denotes the Jacobian matrix of $T$ and $\det$ denotes the determinant.

If $\mu$ and $\nu$ have densities $u$ and $v$ respectively, that is $d\mu = u(x)\,dx$ and $d\nu = v(y)\,dy$, then one can (formally) consider (10) pointwise in $x$:

$$\forall x\in X,\quad u(x) = |\det J_T(x)|\, v(T(x)). \qquad (11)$$

By injecting $T = \nabla\bar\varphi$ and $J_T = H_{\bar\varphi}$ into (11), one obtains:

$$\forall x\in X,\quad u(x) = |\det H_{\bar\varphi}(x)|\, v(\nabla\bar\varphi(x)), \qquad (12)$$

where $H_{\bar\varphi}$ denotes the Hessian matrix of $\bar\varphi$. Equation (12) is known as the Monge–Ampère equation. It is a highly non-linear equation, and its solutions, when they exist, often present singularities (this is similar to the eikonal equation, whose solution corresponds to the distance field, which has a singularity on the medial axis). Note that the derivation above is purely formal, and that studying the solutions of the Monge–Ampère equation requires using more sophisticated tools. In particular, it is possible to define several types of weak solutions (viscosity solutions, solutions in the sense of Brenier, solutions in the sense of Alexandrov …). Several algorithms to compute numerical solutions of the Monge–Ampère equation were proposed. See for instance the Benamou–Brenier algorithm [BB00], that uses a dynamic formulation inspired by fluid dynamics (incompressible Euler equation with specific boundary conditions). See also [PPO14].
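In 1D the optimal map for the quadratic cost is the monotone rearrangement $T = G^{-1}\circ F$, where $F$ and $G$ are the cumulative distribution functions of the two densities, and the identity (11) can be checked numerically. A sketch (assuming NumPy; the two Gaussian densities are illustrative):

```python
import numpy as np

# 1D check of u(x) = |T'(x)| v(T(x)) for the monotone map T = G^{-1} o F.
x = np.linspace(-4.0, 4.0, 4001)
dx = x[1] - x[0]
u = np.exp(-x ** 2 / 2.0); u /= u.sum() * dx          # source density
v = np.exp(-(x - 1.0) ** 2 / 0.5); v /= v.sum() * dx  # target density

F = np.cumsum(u) * dx
G = np.cumsum(v) * dx
T = np.interp(F, G, x)          # T = G^{-1} o F, sampled on the x grid
dT = np.gradient(T, dx)         # T' (positive: T is increasing)

lhs = u[1000:3000]              # restrict to x in [-2, 2], away from the tails
rhs = dT[1000:3000] * np.interp(T[1000:3000], x, v)
print(np.max(np.abs(lhs - rhs)))  # small: the identity holds numerically
```

Here $T$ happens to be affine ($T(x) = 1 + x/2$), so the Jacobian factor is constant; the same check works for any pair of 1D densities.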

### 6.2 The discrete → discrete case

If $\mu$ is a sum of $m$ Dirac masses and $\nu$ a sum of $n$ Dirac masses, then the problem boils down to finding the coefficients $\gamma_{ij}$ that give, for each pair of points $x_i$ of the source space and $y_j$ of the target space, the quantity of matter transported from $x_i$ to $y_j$. This corresponds to the transport plan in the discrete Kantorovich problem that we have seen previously (§4). This type of problem (referred to as an assignment problem) can be solved by different methods of linear programming [BDM09]. These methods can be dramatically accelerated by adding a regularization term, that can be interpreted as the entropy of the transport plan [Leo13]. This regularized version of optimal transport can be solved by highly efficient numerical algorithms [Cut13].
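As a sketch of the entropic approach (the standard Sinkhorn scaling iterations on a made-up instance, assuming NumPy; not the tuned algorithm of [Cut13] verbatim): the regularized plan has the form $\mathrm{diag}(a)\,K\,\mathrm{diag}(b)$ with $K = e^{-C/\varepsilon}$, and $a$, $b$ are obtained by alternately matching the two marginals.

```python
import numpy as np

def sinkhorn(C, U, V, eps=0.1, iters=2000):
    """Entropy-regularized OT: gamma = diag(a) K diag(b), K = exp(-C/eps)."""
    K = np.exp(-C / eps)
    b = np.ones_like(V)
    for _ in range(iters):
        a = U / (K @ b)       # match row marginals
        b = V / (K.T @ a)     # match column marginals
    return a[:, None] * K * b[None, :]

xs = np.linspace(0.0, 1.0, 5)
C = (xs[:, None] - xs[None, :]) ** 2
U = np.full(5, 0.2)
V = np.array([0.1, 0.2, 0.4, 0.2, 0.1])
gamma = sinkhorn(C, U, V)
# The marginals of the regularized plan match U and V.
print(np.abs(gamma.sum(axis=1) - U).max(), np.abs(gamma.sum(axis=0) - V).max())
```

Each iteration costs only two matrix–vector products, which is what makes this approach so much faster than generic linear programming on large instances; the price is a slightly blurred plan, sharper as $\varepsilon \to 0$.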

### 6.3 The continuous → discrete case

This configuration, also called semi-discrete, corresponds to a continuous function transported to a sum of Dirac masses (see the examples of $c$-concave functions in [GM96]). It corresponds to our example with factories that consume a resource, in §2.2. Semi-discrete transport has interesting connections with some notions of computational geometry and some specific sets of convex polyhedra that were studied by Alexandrov [Ale05] and later by Aurenhammer, Hoffmann and Aronov [AHA92]. The next section is devoted to this configuration.

## 7 Semi-discrete transport

We now suppose that the source measure $\mu$ is continuous, and that the target measure $\nu$ is a sum of Dirac masses. A practical example of this type of configuration corresponds to a resource whose available quantity is represented by a function $u$. The resource is collected by a set of $n$ factories, as shown in Figure 10. Each factory $y_j$ is supposed to collect a certain prescribed quantity of resource $\nu_j$. Clearly, the sum of all prescriptions corresponds to the total quantity of available resource ($\sum_j \nu_j = \int_X u(x)\,dx$).

### 7.1 Semi-discrete Monge problem

In this specific configuration, the Monge problem becomes:

$$\inf_{T:X\to Y} \int_X c(x,T(x))\,u(x)\,dx, \quad \text{subject to } \int_{T^{-1}(y_j)} u(x)\,dx = \nu_j, \ \forall j.$$

A transport map $T$ associates with each point of $X$ one of the points $y_j$. Thus, it is possible to partition $X$ by associating with each $y_j$ the region $T^{-1}(y_j)$ that contains all the points transported towards $y_j$ by $T$. The constraint imposes that the quantity of resource collected over each region $T^{-1}(y_j)$ corresponds to the prescribed quantity $\nu_j$.

Let us now examine the form of the dual Kantorovich problem. In terms of measures, the source measure $\mu$ has a density $u$, and the target measure $\nu = \sum_{j=1}^n \nu_j \delta_{y_j}$ is a sum of Dirac masses, supported by the set of points $Y = \{y_1, \dots, y_n\}$. We recall that in its general form, the dual Kantorovich problem is written as follows:

$$\sup_{\psi \in \Psi_c(Y)} \left[ \int_X \psi^c(x)\, d\mu + \int_Y \psi(y)\, d\nu \right]. \qquad (13)$$

In our semi-discrete case, the functional becomes a function $F$ of $n$ variables, with the following form:

$$\begin{aligned} F(\psi) &= F(\psi_1, \psi_2, \dots, \psi_n) & (14)\\ &= \int_X \psi^c(x)\, u(x)\, dx + \sum_{j=1}^n \psi_j \nu_j & (15)\\ &= \int_X \inf_{y_j \in Y} \left[ c(x,y_j) - \psi_j \right] u(x)\, dx + \sum_{j=1}^n \psi_j \nu_j & (16)\\ &= \sum_{j=1}^n \int_{\mathrm{Lag}^c_\psi(y_j)} \left( c(x,y_j) - \psi_j \right) u(x)\, dx + \sum_{j=1}^n \psi_j \nu_j. & (17) \end{aligned}$$

The first step (15) takes into account the nature of the measures $\mu$ and $\nu$. In particular, one can notice that the measure $\nu$ is completely defined by the scalars $\nu_j$ associated with the points $y_j$, and that the function $\psi$ is defined by the $n$ scalars $\psi_j = \psi(y_j)$ that correspond to its value at each point $y_j$. The integral $\int_Y \psi\, d\nu$ becomes the dot product $\sum_j \psi_j \nu_j$. Thus, the functional that corresponds to the dual Kantorovich problem becomes a function $F$ that depends on $n$ variables (the $\psi_j$'s). Let us now replace the c-conjugate $\psi^c$ with its expression, which gives (16). The integral in the left term can be reorganized, by grouping the points of $X$ for which the same point $y_j$ minimizes $c(x,y_j) - \psi_j$, which gives (17), where the Laguerre cell $\mathrm{Lag}^c_\psi(y_j)$ is defined by:

$$\mathrm{Lag}^c_\psi(y_j) = \left\{ x \in X \ \middle|\ c(x,y_j) - \psi_j \le c(x,y_k) - \psi_k, \ \forall k \neq j \right\}.$$

The Laguerre diagram, formed by the union of the Laguerre cells, is a classical structure in computational geometry. In the case of the cost $c(x,y) = \|x-y\|^2$, it corresponds to the power diagram, which was studied by Aurenhammer at the end of the 1980s [Aur87]. One of its particularities is that the boundaries of the cells are straight line segments, making it reasonably easy to design computational algorithms to construct them.
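As an illustration, membership in the Laguerre cells for the cost $c(x,y)=\|x-y\|^2$ can be sampled on a grid with a few lines of NumPy. This brute-force point classification is only a sketch (practical implementations construct the power diagram combinatorially), and the sites and weights below are arbitrary choices:

```python
import numpy as np

def laguerre_assign(samples, points, psi):
    """Index of the Laguerre cell containing each sample, for c(x,y)=|x-y|^2.
    Ties on cell boundaries are broken arbitrarily by argmin."""
    # squared distances, shape (num_samples, num_points)
    d2 = ((samples[:, None, :] - points[None, :, :]) ** 2).sum(axis=-1)
    return np.argmin(d2 - psi[None, :], axis=1)

# Three sites in the unit square, classified on a 64x64 sampling grid
pts = np.array([[0.2, 0.2], [0.8, 0.3], [0.5, 0.8]])
g = np.linspace(0.0, 1.0, 64)
samples = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)

cells_voronoi = laguerre_assign(samples, pts, np.zeros(3))   # psi = 0: Voronoi
cells_shifted = laguerre_assign(samples, pts, np.array([0.3, 0.0, 0.0]))
# Raising psi_0 enlarges the Laguerre cell of y_0:
print((cells_shifted == 0).sum() > (cells_voronoi == 0).sum())
```

With $\psi = 0$ the Laguerre diagram degenerates to the ordinary Voronoi diagram; increasing a single $\psi_j$ grows the corresponding cell at the expense of its neighbors, which is exactly the degree of freedom used to satisfy the mass prescriptions.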

### 7.2 Concavity of F

The objective function $F$ is a particular case of the Kantorovich dual, and naturally inherits its properties, such as its concavity (which we did not discuss yet). This property is interesting both from a theoretical point of view, to study the existence and uniqueness of solutions, and from a practical point of view, to design efficient numerical solution mechanisms. In the semi-discrete case, the concavity of $F$ is easier to prove than in the general case. We summarize here the proof by Aurenhammer et al. [AHA92], which leads to an efficient algorithm [Mér11], [Lév15], [KMT16].

###### Theorem 2.

The objective function $F$ of the semi-discrete Kantorovich dual problem (13) is concave.

###### Proof.

Consider the function $G$ defined by:

$$G(A, [\psi_1, \dots, \psi_n]) = \int_X \left( c(x, y_{A(x)}) - \psi_{A(x)} \right) u(x)\, dx, \qquad (18)$$

and parameterized by an assignment $A : X \to \{1, \dots, n\}$, that is, a function that associates with each point of $X$ the index of one of the points $y_j$. If we denote $A^{-1}(j) = \{x \in X \ | \ A(x) = j\}$, then $G$ can also be written as:

$$G(A,\psi) = \sum_j \int_{A^{-1}(j)} \left( c(x,y_j) - \psi_j \right) u(x)\, dx = \sum_j \int_{A^{-1}(j)} c(x,y_j)\, u(x)\, dx - \sum_j \psi_j \int_{A^{-1}(j)} u(x)\, dx. \qquad (19)$$

The first term does not depend on the $\psi_j$'s, and the second one is a linear combination of the $\psi_j$ coefficients; thus, for a given fixed assignment $A$, $G$ is an affine function of $\psi$. Figure 11 depicts the appearance of the graph of $G$ for different assignments. The horizontal axis symbolizes the components of the vector $\psi$ (of dimension $n$) and the vertical axis the value of $G$. For a given assignment $A$, the graph of $\psi \mapsto G(A,\psi)$ is a hyperplane (symbolized here by a straight line).
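This affine dependence is easy to check numerically. The sketch below discretizes $X$ as a 1D grid with uniform density $u \equiv 1$ (an arbitrary illustrative setup, not from the text), verifies that $G(A, \cdot)$ is affine for a fixed random assignment $A$, and checks midpoint concavity of the lower envelope $\psi \mapsto G(A_\psi, \psi)$:

```python
import numpy as np

rng = np.random.default_rng(0)
xs = np.linspace(0.0, 1.0, 200)        # discretization of X, with u(x) = 1
dx = xs[1] - xs[0]
ys = np.array([0.1, 0.5, 0.9])         # the points y_j
C = (xs[:, None] - ys[None, :]) ** 2   # c(x_i, y_j) = |x_i - y_j|^2

def G(A, psi):
    """G(A, psi): grid approximation of the integral of (c(x,y_A(x)) - psi_A(x)) u(x) dx."""
    return ((C[np.arange(len(xs)), A] - psi[A]) * dx).sum()

def G_env(psi):
    """Lower envelope G(A_psi, psi): the assignment minimizing the integrand pointwise."""
    return ((C - psi[None, :]).min(axis=1) * dx).sum()

A = rng.integers(0, 3, size=len(xs))   # an arbitrary fixed assignment
p, q = rng.normal(size=3), rng.normal(size=3)
t = 0.3

# Affine in psi for fixed A: G(A, t p + (1-t) q) = t G(A, p) + (1-t) G(A, q)
print(abs(G(A, t * p + (1 - t) * q) - (t * G(A, p) + (1 - t) * G(A, q))))
# Midpoint concavity of the lower envelope
print(G_env((p + q) / 2) >= (G_env(p) + G_env(q)) / 2 - 1e-12)
```

The first printed value is zero up to rounding; the second check holds because a pointwise minimum of affine functions is concave, which is the content of the proof.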

Among all the possible assignments, we distinguish the assignment $A_\psi$ that associates with a point $x$ the index of the Laguerre cell $x$ belongs to 101010 $A_\psi$ is undefined on the set of Laguerre cell boundaries; this does not cause any difficulty because this set has zero measure., that is:

$$A_\psi(x) = \arg\min_j \left[ c(x, y_j) - \psi_j \right].$$

For a fixed vector $\psi$, among all the possible assignments $A$, the assignment $A_\psi$ minimizes the value $G(A,\psi)$, because it minimizes the integrand pointwise (see Figure 11 on the left). Thus, since $G$ is affine with respect to $\psi$, the graph of the function $\psi \mapsto G(A_\psi, \psi)$ is the lower envelope of a family of hyperplanes (symbolized as straight lines in Figure 11 on the right), hence $\psi \mapsto G(A_\psi, \psi)$ is a concave function. Finally, the objective function of the dual Kantorovich problem can be written as $F(\psi) = G(A_\psi, \psi) + \sum_j \psi_j \nu_j$, that is, the sum of a concave function and a linear function, hence it is also a concave function. ∎
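Concavity makes it possible to compute the optimal $\psi$ by a simple ascent method: differentiating (17) gives $\partial F / \partial \psi_j = \nu_j - \int_{\mathrm{Lag}^c_\psi(y_j)} u(x)\,dx$, so each step grows the cells that capture too little mass. The sketch below runs fixed-step gradient ascent on a discretized unit square with uniform density and quadratic cost; the sites, prescriptions, step size and iteration count are arbitrary illustrative choices, and the damped Newton solvers of [Mér11], [Lév15], [KMT16] are far more efficient:

```python
import numpy as np

def laguerre_masses(samples, w, points, psi):
    """Mass captured by each Laguerre cell, estimated on weighted samples,
    for the cost c(x,y) = |x - y|^2."""
    d2 = ((samples[:, None, :] - points[None, :, :]) ** 2).sum(axis=-1)
    cell = np.argmin(d2 - psi[None, :], axis=1)
    return np.bincount(cell, weights=w, minlength=len(points))

# Uniform density u on the unit square, discretized on an 80x80 grid
g = np.linspace(0.0, 1.0, 80)
samples = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)
w = np.full(len(samples), 1.0 / len(samples))   # each sample carries equal mass

points = np.array([[0.25, 0.25], [0.75, 0.25], [0.5, 0.75]])  # the y_j
nu = np.array([0.5, 0.3, 0.2])                  # prescribed masses nu_j

# Fixed-step gradient ascent: dF/dpsi_j = nu_j - mass(Lag_j)
psi = np.zeros(3)
for _ in range(2000):
    psi += 0.5 * (nu - laguerre_masses(samples, w, points, psi))

print(np.abs(nu - laguerre_masses(samples, w, points, psi)).max())
```

At convergence each Laguerre cell captures its prescription $\nu_j$ up to the sampling resolution, and the (approximate) optimal transport map sends every sample to the site of its cell.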

### 7.3 The semi-discrete optimal transport map

Let us now examine the $c$-superdifferential $\partial^c\psi$, that is, the set of pairs of points ($x$, $y$) connected by the optimal transport map. We recall that the $c$-superdifferential can be defined alternatively as $\partial^c\psi = \{(x,y) \ | \ \psi^c(x) + \psi(y) = c(x,y)\}$. Consider a point $x$ of $X$ that belongs to the Laguerre cell $\mathrm{Lag}^c_\psi(y_j)$. The $c$-superdifferential at $x$ is defined by:

$$\begin{aligned} [\partial^c\psi](x) &= \{ y_k \ | \ \psi^c(x) + \psi_k = c(x,y_k) \} & (20)\\ &= \{ y_k \ | \ \inf_{y_l} [c(x,y_l) - \psi_l] + \psi_k = c(x,y_k) \} & (21)\\ &= \{ y_k \ | \ c(x,y_j) - \psi_j + \psi_k = c(x,y_k) \} & (22)\\ &= \{ y_k \ | \ c(x,y_j) - \psi_j = c(x,y_k) - \psi_k \} & (23)\\ &= \{ y_j \}. & (24) \end{aligned}$$

In the first step (21), we replace $\psi^c$ by its definition; then we use the fact that $x$ belongs to the Laguerre cell of $y_j$ (22); and finally, the only point of $Y$ that satisfies (23) is $y_j$, because we have supposed $x$ inside the Laguerre cell of $y_j$.

To summarize, the optimal transport map $T$ moves each point $x$ to the point $y_j$ associated with the Laguerre cell that contains $x$. The vector $\psi$ is the unique vector that maximizes the discrete dual Kantorovich function $F$ and such that $\psi$ is c-concave. It is possible to show that $\psi$ is c-concave if and only if no Laguerre cell is empty of matter, that is, the integral of $u$ is non-zero on each Laguerre cell. Indeed, we have the following results:

###### Theorem 3.

Let $Y = \{y_1, \dots, y_n\}$ be a set of points, and let $\psi$ be any function defined on $Y$ such that the Laguerre cells $\mathrm{Lag}^c_\psi(y_i)$ are not empty sets. Then $\psi$ is a $c$-concave function.

###### Proof.

By definition, for every $y_i \in Y$,

$$(\psi^c)^c(y_i) = \inf_{x \in X} \left[ c(x,y_i) - \psi^c(x) \right] = \inf_{x \in X} \left[ c(x,y_i) - \left( \inf_{y_j \in Y} \left[ c(x,y_j) - \psi(y_j) \right] \right) \right] = \inf_{x \in X} \begin{cases} \psi(y_i), & \text{if } x \in \mathrm{Lag}^c_\psi(y_i)\\ c(x,y_i) - \left( c(x,y_j) - \psi(y_j) \right) \ge \psi(y_i), & \text{if } x \in \mathrm{Lag}^c_\psi(y_j) \ (j \neq i) \end{cases} = \psi(y_i).$$

This allows us to conclude that $\psi$ is a $c$-concave function (see [San15, Proposition 1.34]). ∎
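Theorem 3 can be illustrated numerically: on a sampled 1D domain where every Laguerre cell is non-empty, the double $c$-transform returns $\psi$ at the points $y_i$, up to floating-point error. The grid, sites and values of $\psi$ below are arbitrary illustrative choices:

```python
import numpy as np

# Numeric illustration of Theorem 3 on a 1D grid: if every Laguerre cell is
# non-empty, the double c-transform gives back psi at the points y_i.
xs = np.linspace(0.0, 1.0, 2001)                 # dense sampling of X
ys = np.array([0.15, 0.5, 0.85])
psi = np.array([0.02, 0.0, -0.01])               # moderate values: all cells non-empty
C = (xs[:, None] - ys[None, :]) ** 2             # c(x, y_j)

psi_c = (C - psi[None, :]).min(axis=1)           # psi^c(x)   = min_j c(x,y_j) - psi_j
cells = (C - psi[None, :]).argmin(axis=1)
assert set(np.unique(cells)) == {0, 1, 2}        # every Laguerre cell is non-empty

psi_cc = (C - psi_c[:, None]).min(axis=0)        # (psi^c)^c(y_i) = min_x c(x,y_i) - psi^c(x)
print(np.abs(psi_cc - psi).max())                # close to zero
```

Making one $\psi_i$ very negative empties the corresponding cell, and the same computation then returns $(\psi^c)^c(y_i) > \psi(y_i)$, in line with Theorem 4.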

Moreover, the converse of the theorem is also true:

###### Theorem 4.

Let $Y = \{y_1, \dots, y_n\}$ be a set of points, and let $\psi$ be a $c$-concave function defined on $Y$. Then the Laguerre cells $\mathrm{Lag}^c_\psi(y_i)$ are not empty for all $i \in \{1, \dots, n\}$.

###### Proof.

Reasoning by contradiction, we suppose that $\psi$ is a $c$-concave function and that there exists $i_0$ such that $\mathrm{Lag}^c_\psi(y_{i_0}) = \emptyset$. Then, from the definition of the Laguerre cells,

$$\forall x \in X, \ \exists j \in \{1, \dots, n\} \text{ with } j \neq i_0, \text{ and } \epsilon_j > 0 \text{ such that } c(x, y_{i_0}) - \psi(y_{i_0}) \ge c(x, y_j) - \psi(y_j) + \epsilon_j.$$

We will write $x_j$ for a point $x$ at which this inequality holds with index $j$. Thus,

$$c(x_j, y_{i_0}) - \left( \inf_{y_j \in Y} \left[ c(x_j, y_j) - \psi(y_j) \right] \right) \ge c(x_j, y_{i_0}) - \left( c(x_j, y_{i_0}) - \psi(y_{i_0}) \right)$$