Pre-computed Liquid Spaces with Generative Neural Networks and Optical Flow

Lukas Prantl, Boris Bonev, and Nils Thuerey
Technical University of Munich, Boltzmannstr. 3, 85748 Garching, Germany
{Lukas.Prantl, Boris.Bonev, Nils.Thuerey}@in.tum.de
Abstract.

Liquids exhibit highly complex, non-linear behavior under changing simulation conditions such as user interactions. We propose a method to map this complex behavior over a parameter range onto a reduced representation based on space-time deformations. In order to represent the complexity of the full space of inputs, we use deformations from optical flow solves with an improved alignment procedure, and we leverage the power of generative neural networks to synthesize additional deformations for refinement. We introduce a novel deformation-aware loss function, which enables optimization in the highly non-linear space of multiple deformations. To demonstrate the effectiveness of our approach, we showcase the method with several complex examples in two and four dimensions. Our representation makes it possible to rapidly generate implicit surfaces of liquids, which allows us to very efficiently display the scene from any angle, and to add secondary effects such as splash and foam particles. We have implemented a mobile application with our full pipeline to demonstrate that real-time interactions with complex liquid effects are possible with our approach.

Liquid simulation, convolutional neural networks, optical flow
Figure 1. Our method represents a complex space of liquid behavior, such as varying liquid drops and obstacle interactions with a novel and highly reduced representation. In this way, our method yields a first approach to realize complex liquid effects on devices such as mobile phones, several examples of which can be seen above.

1. Introduction

Having the possibility to interact with physical effects allows users to intuitively experience their complexity. In interactive settings, users can ideally inspect the scene from different angles, and experiment with the physical phenomena using varying modes of interaction. This hands-on experience is difficult to convey with pre-simulated and rendered sequences. While it is possible to perform real-time simulations with enough computing power at hand [Macklin et al., 2014], many practical applications cannot rely on a dedicated GPU at their full disposal. An alternative to these direct simulations are data-driven methods that pre-compute the motion and/or the interaction space to extract a reduced representation that can be evaluated in real-time. This direction has been demonstrated for a variety of interesting effects such as detailed clothing [Kim et al., 2013], soft-tissue effects [Xu and Barbič, 2016], swirling smoke [Treuille et al., 2006], or for forces in SPH simulations [Ladicky et al., 2015].

Liquids are an especially tough candidate in this area, as the constantly changing boundary conditions at the liquid-gas interface result in a strongly varying and complex space of surface motions and configurations. A notable exception here is the StateRank approach [Stanton et al., 2014], which enables interactive liquids by carefully pre-computing a large graph of pre-rendered video clips to synthesize the desired liquid behavior. The algorithm by Ladicky et al. [2015] also successfully proposes a pre-computation scheme for liquids, which however targets the accelerations in a Lagrangian approach in order to accelerate a full dynamic simulation.

In the following, we will present a novel approach to realize interactive liquid simulations: we treat a (potentially very large) collection of input simulations as a smoothly deforming set of space-time signed-distance functions (SDF). An initial implicit surface is deformed in space and time to yield the desired behavior as specified by the initial collection of simulations. To calculate and represent the deformations efficiently, we take a two-step approach: First, we span the sides of the original collection with multiple pre-computed deformations. In a second step, we refine the resulting approximate surfaces with a generative neural network. Given the pre-computed deformations, this network is trained to represent the details within the inner space of the initial data set.

For data-driven liquids, working with implicit surfaces represents a sweet spot between the raw simulation data, which for liquids often consists of a large number of particles, and a pre-rendered video. The deformed implicit surface can be rendered very efficiently from any viewpoint, but still contains enough spatial information to, e.g., be augmented with secondary effects such as splashing droplets. At the same time, it makes our method general in the sense that we can work with implicit surfaces generated by any simulation approach.

All steps of our pipeline, from applying the pre-computed deformations to evaluating the neural network and rendering the surface, can be performed very efficiently. To demonstrate the very small computational footprint of our approach, we run our final models on a regular mobile device at interactive frame rates. Our demo application, shown in Fig. 2, can be obtained online in the Android Play store as "Neural Liquid Drop" [Prantl et al., 2017].

The central contributions of our work are a novel deformation-aware neural network approach to efficiently represent large collections of space-time surfaces, an improved deformation alignment step, in conjunction with a method for long-range correspondences in order to compute more accurate end-point deformations, and a real-time synthesis framework for mobile devices. Our contributions make it possible to generate liquid animations several orders of magnitude faster than with a regular simulator. We achieve effective speed-up factors of more than 2000, as we will outline in Sec. 7.

Figure 2. A user placing drops in our mobile implementation [Prantl et al., 2017].

2. Related Work

Representing complex physics with reduced models has a long and successful history in computer graphics. Initially, works focused particularly on deformable models, e.g., demonstrating fast simulations with modal dynamics [Pentland and Williams, 1989] or using tabulation techniques for deformable scenes [James and Fatahalian, 2003] (capturing illumination effects at the same time). Subsequent works targeted more expressive and general constitutive models [Barbič and James, 2005] and related effects such as sound generation [James et al., 2006]. More recently, researchers have also demonstrated flexible character animations with reduced models [Kim and James, 2012], even including interactions with collisions not captured in the pre-computed basis [Teng et al., 2015] or coupling Lagrangian and Eulerian reduced representations [Teng et al., 2016]. Pre-computed models were shown to be especially useful in constrained and repetitive settings, e.g., for secondary soft-tissue effects [Xu and Barbič, 2016]. In the following, we will focus on fluid effects, for which the commonly used physical model is given by the Navier-Stokes (NS) equations: $\partial \mathbf{u}/\partial t + \mathbf{u} \cdot \nabla \mathbf{u} = -\nabla p / \rho + \nu \nabla^2 \mathbf{u} + \mathbf{g}$, in conjunction with the constraint $\nabla \cdot \mathbf{u} = 0$. Here $\mathbf{u}$ is the flow velocity, $p$ the pressure, and $\mathbf{g}$ denotes the external body forces. $\nu$ parametrizes viscosity, and is typically set to zero for graphics applications, yielding the incompressible Euler equations. Model reduction techniques have also been applied successfully to single-phase flows, i.e., wind and smoke effects [Treuille et al., 2006]. This approach was generalized to work with modular bases [Wicke et al., 2009], to include rendering effects [Gupta and Narasimhan, 2007], and to accurately model the semi-Lagrangian transport commonly employed in graphics solvers [Kim and Delaney, 2013]. Other researchers have explored specialized basis constructions [De Witt et al., 2012] and compression techniques [Jones et al., 2016]. The gradient calculation of our learning approach also shares similarities with the adjoint method for fluid flow [McNamara et al., 2004], where gradients of the advection operator likewise play an important role. There, the optimization typically works with sequences of three-dimensional, divergence-free velocity fields that contain comparably small motions.

While suitable for single phase flows, the aforementioned techniques become problematic for free-surface flows. In this setting, the commonly used free surface boundary conditions strongly influence the dynamics of the liquid phase and require prohibitively large numbers of basis functions to be captured with sufficient detail. The StateRank approach proposed by Stanton et al. [2014] takes a different viewpoint to make pre-computations for liquid simulations possible. It pre-computes a state graph based on user interactions to capture the liquid, and refines the graph at runtime. In contrast to our approach, the data in this state graph is static. While temporal transitions employ blending, all other pre-rendered video sequences are used without modification. On the other hand, our approach uses a single data point, which is then transformed smoothly into a desired space-time motion. As such, we believe our contributions could complement a state graph approach to significantly reduce its size.

The non-linear model we employ for our reduced representations is centered around concatenated deformations. A central building block for our approach is the calculation of deformation fields from surface correspondences established by close proximity. Previous works employed non-rigid iterative closest point methods for Lagrangian surfaces [Raveendran et al., 2014], and variational formulations for Eulerian data sets [Thuerey, 2017]. We will likewise take an Eulerian viewpoint and employ the latter method. However, we propose two novel modifications when computing and applying Eulerian deformations to increase accuracy and robustness. What sets our method apart from these two previous approaches is that both of them focused on computing single deformations, instead of dealing with optimization problems that target a whole space spanned by deformations. These previous approaches always work with two single data points, source and target, while our approach considers the whole collection of target data sets at once. It is worth pointing out that the deformations typically have long range, vary strongly in space and time, and that applying them is a non-linear operation. While this makes our representation very small and expressive, the high degree of non-linearity makes them difficult to handle in optimization frameworks. This is one of the central issues we will address in Sec. 4.2.

There are various methods in graphics that have targeted real-time simulations with specialized techniques. Especially for larger open water scenes, reducing the dimensionality with two-dimensional models allows for very fast simulations [Kass and Miller, 1990; Wang et al., 2007]. Interesting variations include erosion effects [Stava et al., 2008] or incorporate three-dimensional effects near the surface [Chentanez and Müller, 2011]. Particle-based liquid simulations are also highly interesting for fast simulations [Solenthaler and Pajarola, 2009; Macklin and Müller, 2013]. Likewise, many recent techniques focus on novel representations to allow for fast simulations, e.g., to represent highly detailed surfaces with meshes [Chentanez et al., 2015] or to represent detailed cut surfaces [Manteaux et al., 2015]. We employ a typical NS solver from the graphics area [Stam, 1999; Enright et al., 2003], using ghost-fluid boundaries [Enright et al., 2003], and the FLuid-Implicit Particle method (FLIP) [Zhu and Bridson, 2005]. However, our method is not restricted to inputs from this particular choice of solver. Liquid surfaces from particle-based simulators [Ihmsen et al., 2014] or any other type of physical simulator would likewise be suitable inputs, as long as an implicit representation for each time step can be calculated.

While the fundamentals of machine learning were established at the advent of modern computer technology [Werbos, 1974; Rumelhart et al., 1988], the field of machine learning with neural networks (NNs) has recently experienced a steep increase in interest due to technological advances. Impressive results have been achieved in many disciplines of computer vision, such as image classification [Krizhevsky et al., 2012] and object detection and segmentation [Girshick et al., 2014]. In the field of computer graphics, researchers have also employed neural networks for diverse tasks such as robust human correspondences from depth maps [Wei et al., 2015], visual similarity for product design [Bell and Bala, 2015], automatic image colorization [Iizuka et al., 2016] and photo retrieval from sketches [Sangkloy et al., 2016]. The general architecture of our networks follows other generative approaches [Goodfellow et al., 2014], with the central difference that we focus on learning deformations, and do not employ an adversarial network architecture.

Closer to our area of application, researchers have also investigated computing image deformations, i.e. solving the optical flow problem, with neural networks. While many works target localized regions and feature descriptors to retrieve depth and correspondences [Bailer et al., 2016], several works have investigated retrieving dense image-space deformations. The so called FlowNet architecture has proven to be efficient and accurate [Dosovitskiy et al., 2015] and has been extended to compute large-range motions with the help of multi-scale approaches [Ranjan and Black, 2016; Ilg et al., 2016]. These papers require explicit annotations with deformation data during the learning phase. In contrast, our networks learn to deform the data given only a target data set. This is crucial for applications where reference motions are not readily available for all inputs.

Despite the success stories in many areas of computer science, machine learning techniques have rarely been applied in the area of physically-based animations. Specifically for liquid simulations, a recent approach was proposed to efficiently model response forces for particle-based simulations using regression forests [Ladicky et al., 2015]. Only a few works so far employ neural networks for Eulerian simulations: one work aims for learning localized, divergence-free velocity updates [Yang et al., 2016], while another work learns to remove divergent motions for whole grids using convolutional neural networks [Tompson et al., 2016]. In addition, neural networks have recently been used to learn the similarity of coarse and fine smoke simulations [Chu and Thuerey, 2017], and to generate small-scale splash effects [Um et al., 2017]. All of these works employ machine learning to speed up or augment direct simulations, while we aim to represent clearly defined regions of liquid behavior with our approach. We will employ convolutional neural networks in the following, as their generative capabilities were shown to be powerful [Goodfellow et al., 2014]. After spanning the sides of a chosen region with pre-computed long-range deformations, our networks learn to generate deformations that respect the content of these regions.

Figure 3. Numerical inaccuracies over time lead to unpredictable changes, even when the input parameters only change very slightly, as can be seen for the right sheet in each image.

3. Problem Description

Given a Navier-Stokes boundary value problem with a liquid-gas interface, we consider the interface over time as a single space-time surface. We start with a set of these space-time surfaces, which we will call $\{\psi(\bm{\alpha})\}$, defined over a convex region of our simulation parameters $\bm{\alpha} = (\alpha_1, \dots, \alpha_N)$. We assume all parameter dimensions to be normalized, i.e., $\bm{\alpha} \in [0,1]^N$. In practice, $\bm{\alpha}$ could be any parameter of the simulation, e.g., initial positions, velocity boundary conditions, or physical parameters such as viscosity. Looking at this set of surfaces as a whole, $\{\psi(\bm{\alpha})\}$ represents the complex manifold that we want to capture and represent. We choose implicit functions to represent specific instances of the manifold, such that the surface is given by the zero level set of $\psi(\mathbf{x}, t, \bm{\alpha})$. In the following, these implicit functions will denote four-dimensional signed distance functions, where the first three dimensions ($\mathbf{x}$) are space, while the fourth dimension ($t$) represents time. We will abbreviate $\psi(\mathbf{x}, t, \bm{\alpha})$ with $\psi_r(\bm{\alpha})$ in the following to simplify notation, and to indicate that this set of surfaces represents a collection of constant reference surfaces in our setting.

Representing the whole set $\{\psi_r(\bm{\alpha})\}$ is a challenging task. In theory, the original surfaces are continuous, and by choosing the implicit representation $\psi_r$, we effectively regularize the space of inputs. Due to bifurcations and noise-like artifacts from discretization errors (as shown in Fig. 3) there can, however, be very significant differences even for small changes of $\bm{\alpha}$. Despite the complex changes in the data, our goal is to find a manageable and reduced representation. We use our prior knowledge of the problem, and embed $\{\psi_r\}$ into a space spanned by warping a single input surface with a set of deformation fields. Thus, we generate an approximation of $\psi_r(\bm{\alpha})$ by applying a space-time deformation to an initial surface $\phi_0$, where the deformation again depends on the simulation parameters $\bm{\alpha}$. In our case this deformation is constructed from a set of auxiliary deformation fields $u_1, \dots, u_N$, where $u_i: \mathbb{R}^4 \rightarrow \mathbb{R}^4$. Note that even the simplest choice with a single deformation, i.e., $\phi(\mathbf{x}) = \phi_0(\mathbf{x} - u_1(\mathbf{x}))$, is already non-linear. For now we will assume that all functions are continuous and sufficiently differentiable. We will later on discretize all functions with Cartesian grids and employ linear interpolation, although other Eulerian representations and higher-order basis functions could likewise be used.

We have two goals when calculating the reduced representation: the first goal is staying true to the original simulations by minimizing $\int \| \phi(\bm{\alpha}) - \psi_r(\bm{\alpha}) \|^2 \, d\mathbf{x}$, where we use the $L^2$ norm to measure the distance between the two implicit functions. Although we will present our algorithm with the use of said norm, the concepts described here generalize well and can be adapted to other metrics. As a secondary goal, we would like to minimize the function above with a small number $N$ of deformation fields $u_i$. The size of this set (note that the set of deformations does not represent a basis, as the application of a single, arbitrary deformation is already a non-linear process) not only directly determines the amount of data we have to store, but also the amount of computations we have to perform to generate an output. We will focus on single sets of $N$ deformations in the following, but naturally multiple sets for adjacent parameter regions could be combined to cover even larger spaces of behavior.

For our method, we span the outer sides of the simplex with pre-computed end-point deformations, and then refine the insides using a generative neural network with a deformation-aware loss function. As the end-point deformations only consider source and target, they do not necessarily represent the space between the two extremes well. This region is where our generative neural network will be active to refine the surfaces produced by the end-point deformations. In the following, we will use an optical flow solver from previous work [Thuerey, 2017] as a starting point to compute the end-point deformations $u_i$. However, our approach is not limited to inputs from optical flow. Any algorithm that computes deformations between two implicit surfaces could be used for this pre-processing stage [Solomon et al., 2015]. As we will point out below, we require these deformations to be smooth. This is usually directly enforced with corresponding regularization when computing the end-point deformations.

In line with previous work [Raveendran et al., 2014; Thuerey, 2017] we concatenate spatial surfaces as slices of each 4D volume, and then generate a 4D SDF from this initial surface. Without loss of generality, we assume similar resolutions in space and time in the following. Otherwise, this straightforward concatenation could lead to a space vs. time distance bias in the volumes, which could, however, be corrected with a suitably scaled distance metric, if necessary.
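To make the data layout concrete, the following sketch shows how such a 4D space-time SDF could be assembled from per-frame 3D SDFs. The function and variable names are hypothetical, and a redistancing or metric-scaling step would be needed if distances across the time axis should be exact.

import numpy as np

def build_spacetime_sdf(frames):
    # frames: list of 3D SDF grids, each of shape [X, Y, Z], one per time step.
    # Stacking yields an [X, Y, Z, T] volume. Note that this is only an
    # approximate 4D signed distance function, since values are not
    # re-normalized across the new time axis.
    return np.stack(frames, axis=-1)

# Example: 60 frames of a 100^3 SDF become one [100, 100, 100, 60] volume.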

3.1. Deformation Alignment

As outlined above, $\psi_r$ will denote the reference signed distance functions (SDFs) of our input parameter space, while $\phi$ will denote instances of a single input surface deformed by our algorithm. We will denote the single initial surface with $\phi_0$ in the following. Here, we typically use the zero point of our parameter space, i.e., $\phi_0 = \psi_r(\mathbf{0})$, with $\mathbf{0} = (0, \dots, 0)$. Hence, we aim for deforming $\phi_0$ such that it matches all instances of $\psi_r(\bm{\alpha})$ as closely as possible. Before we formalize this ansatz in terms of a loss function in the next section, we will explain our approach for applying the set of deformations.

For the end-point deformations, it is our goal to only use a single deformation for each dimension of the parameter space $\bm{\alpha}$. Thus $u_i$ will correspond to the parameter dimension $\alpha_i$, and we can apply the scaled field $\alpha_i u_i$ to compute a deformation for an intermediate point along this dimension. E.g., the deformed initial surface for a chosen point $\alpha_1$ along the first dimension can be computed with $\phi(\mathbf{x}) = \phi_0(\mathbf{x} - \alpha_1 u_1(\mathbf{x}))$.
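A minimal sketch of this scaled semi-Lagrangian lookup, shown in 2D for brevity (the 4D case is analogous); the function name is our own, and boundary handling is simplified to constant extension:

import numpy as np
from scipy.ndimage import map_coordinates

def advect_sdf(phi, u, alpha=1.0):
    # Backward advection: phi_def(x) = phi(x - alpha * u(x)).
    # phi: SDF grid of shape [X, Y]; u: deformation of shape [2, X, Y],
    # in grid units; alpha: weight in [0, 1] for a partial application.
    pos = np.indices(phi.shape).astype(np.float64)  # sample positions x
    return map_coordinates(phi, pos - alpha * u, order=1, mode='nearest')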

Given the sequence of end-point deformations $u_1, \dots, u_N$ and a point $\bm{\alpha}$ in parameter space, a straight-forward approach is to apply each deformation sequentially:

$\phi_1(\mathbf{x}) = \phi_0(\mathbf{x} - \alpha_1 u_1(\mathbf{x})), \qquad \phi_i(\mathbf{x}) = \phi_{i-1}(\mathbf{x} - \alpha_i u_i(\mathbf{x})), \quad i = 2, \dots, N.$   (1)

However, there are two disadvantages to this approach. The main problem is that the deformations $u_i$ are only meaningful if applied with a full weight of $\alpha_i = 1$ (see Fig. 4).

Figure 4. Improved alignment: In row 1) the first two figures in red show two end-point deformations (red arrows). The first one ($u_1$) moves a drop to the right, while $u_2$ changes its size once it has reached its final position. Images with deformations show the source in dark blue, and the deformed surface in light blue. In this example, the two deformations should be combined such that the horizontal position and the drop size can be independently controlled by changing $\alpha_1$ and $\alpha_2$. E.g., on the top right, a correct solution for intermediate weights is shown. Row 2) shows how these deformations are applied in previous work: The second deformation acts on wrong parts of the surface, as the drop has not reached its left-most position for $\alpha_1 < 1$. The result is a partially deformed drop, shown again in the middle row on the right. Row 3) shows our approach: In the bottom left we see the deformation field $u_2$ from previous work. It is also the starting point of our improved alignment, but never applied directly. Our method corrects $u_2$ by transforming it into $u_2^*$, bottom center, which acts on the correct spatial locations. In this example, it means that the expanding velocities from $u_2$ are shifted left to correctly expand the drop based on its initial position. Our method successfully computes the intended result, as shown in the bottom right image.

This means that if a previous deformation was not applied fully, i.e., with a weight of 1, each subsequent deformation will lead to an accumulation of deformation errors. The second disadvantage of this simple method is the fact that many advection steps have to be applied in order to arrive at the final deformed SDF. This also affects performance, as each advection step introduces additional computations. We present an alternative approach which aligns the deformation fields to the final position of the deformation sequence in order to combine them in a meaningful manner. To do this, we introduce the intermediate deformation fields:

$u_N^*(\mathbf{x}) = u_N(\mathbf{x}), \qquad u_i^*(\mathbf{x}) = u_i\Big(\mathbf{x} - \sum_{j=i+1}^{N} u_j^*(\mathbf{x})\Big)$   (2)

In this way, we align all deformation fields to the final destination of the deformation sequence. The Eulerian representation we are using means that advection steps look backward to gather data, which in our context means that we start from the last deformation to align previous deformations. Using the aligned deformation fields $u_i^*$ we can include the weights $\alpha_i$ and assemble the weighted intermediate field

$u_{\mathrm{sum}}(\mathbf{x}) = \sum_{i=1}^{N} \alpha_i \, u_i^*(\mathbf{x})$   (3)

and an inversely weighted correction field

$u_{\mathrm{inv}}(\mathbf{x}) = \sum_{i=1}^{N} (1 - \alpha_i) \, u_i^*(\mathbf{x})$   (4)

The first deformation field represents the weighted sum of all aligned deformations, weighted with the correct amount of deformation specified by the parameters $\alpha_i$. The second field intuitively represents the offset of the deformation field from its destination that is caused by weights $\alpha_i < 1$. Therefore, we correct the position of $u_{\mathrm{sum}}$ by this offset with the help of an additional forward-advection step calculated as:

$u_{\mathrm{fin}}\big(\mathbf{x} + u_{\mathrm{inv}}(\mathbf{x})\big) = u_{\mathrm{sum}}(\mathbf{x})$   (5)

This gives us the final deformation field $u_{\mathrm{fin}}$. It is important to note that the deformation for a position $\mathbf{x} + u_{\mathrm{inv}}(\mathbf{x})$ is assembled at a location $\mathbf{x}$ that is not known a-priori. It has to be transported to its destination with the help of $u_{\mathrm{inv}}$. The following figure illustrates this process:

(a) Both the deformation $u_{\mathrm{sum}}$ and the correction $u_{\mathrm{inv}}$ are initially located at $\mathbf{x}$. (b) $u_{\mathrm{inv}}$ is applied to yield the correct deformation $u_{\mathrm{fin}}$ at location $\mathbf{x} + u_{\mathrm{inv}}(\mathbf{x})$.

This correction is not a regular advection step, as the deformation is being 'pushed' from $\mathbf{x}$ to $\mathbf{x} + u_{\mathrm{inv}}(\mathbf{x})$. In order to solve this advection equation we use an inverse semi-Lagrangian step, inspired by algorithms such as the one by Lentine et al. [2011], pushing values forward with linear interpolation. As multiple values can end up in a single location, we normalize their contributions. Afterwards, we perform several iterations of a 'fill-in' step to make sure all cells in the target deformation grid receive a contribution (we simply extend and average deformation values from all initialized cells into uninitialized regions).
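The following sketch illustrates this push-style step in 2D under simplifying assumptions: nearest-cell scatter stands in for linear splatting, and a nearest-neighbor lookup stands in for the iterative fill-in averaging. It is a sketch of the idea, not the exact production implementation.

import numpy as np
from scipy.ndimage import distance_transform_edt

def forward_advect(u_sum, u_inv):
    # Inverse semi-Lagrangian step: solve u_fin(x + u_inv(x)) = u_sum(x).
    # u_sum, u_inv: fields of shape [2, X, Y] (4D fields work analogously).
    _, X, Y = u_sum.shape
    idx = np.indices((X, Y))
    tgt = np.rint(idx + u_inv).astype(int)       # push target x + u_inv(x)
    tgt[0] = np.clip(tgt[0], 0, X - 1)
    tgt[1] = np.clip(tgt[1], 0, Y - 1)

    u_fin = np.zeros_like(u_sum)
    wgt = np.zeros((X, Y))
    np.add.at(wgt, (tgt[0], tgt[1]), 1.0)        # count contributions per cell
    for c in range(2):                           # scatter each component
        np.add.at(u_fin[c], (tgt[0], tgt[1]), u_sum[c])
    nonzero = wgt > 0
    u_fin[:, nonzero] /= wgt[nonzero]            # normalize multiple values

    if (~nonzero).any():                         # fill-in for empty cells
        _, nearest = distance_transform_edt(~nonzero, return_indices=True)
        u_fin = u_fin[:, nearest[0], nearest[1]]
    return u_fin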

The deformed SDF is then calculated with a regular advection step applying the final, aligned deformation $u_{\mathrm{fin}}$:

$\phi_{\mathrm{def}}(\mathbf{x}) = \phi_0\big(\mathbf{x} - u_{\mathrm{fin}}(\mathbf{x})\big)$   (6)

Based on our correction step from Eq. (5) this method now respects the case of partially applied deformations. As the aligned deformations $u_i^*$ do not depend on $\bm{\alpha}$, we can pre-compute them. To retrieve the final result it is now sufficient to sum up all deformations in $u_{\mathrm{sum}}$ and $u_{\mathrm{inv}}$, then apply one forward-advection step to compute $u_{\mathrm{fin}}$, and finally deform the SDF by applying semi-Lagrangian advection. While our method is identical to the alignment from previous work [Thuerey, 2017] for $\alpha_i = 1$, its importance for practical deformations with weights $\alpha_i < 1$ is illustrated in Fig. 4.
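Putting the pieces together, a compact sketch of the full alignment pipeline of Eqs. (3)-(6), reusing advect_sdf() and forward_advect() from the sketches above:

def apply_aligned_deformations(phi0, u_star, weights):
    # u_star: list of N pre-aligned fields u_i^*, each of shape [2, X, Y];
    # weights: the N deformation parameters (alpha_i or, later, beta_i).
    u_sum = sum(w * u for w, u in zip(weights, u_star))          # Eq. (3)
    u_inv = sum((1.0 - w) * u for w, u in zip(weights, u_star))  # Eq. (4)
    u_fin = forward_advect(u_sum, u_inv)                         # Eq. (5)
    return advect_sdf(phi0, u_fin)                               # Eq. (6)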

This algorithm for aligning deformations will be our starting point for generating initial deformed surfaces with end-point deformations. These deformed surfaces will then be adjusted and refined by our neural networks that we will explain in the following section.

4. Learning Deformations

In this section we will outline a deformation-aware neural network approach to improve the approximation of the initial surface set $\{\psi_r(\bm{\alpha})\}$. When adjusting the composition of deformations, the algorithm explained so far will yield a smooth transition, but it will typically not capture the complex, non-linear behavior that occurs when fluid simulation parameters are changed. This is not surprising, as the end-point deformations are optimized to yield the best possible final result only for $\alpha_i = 1$. We will show that our network can learn to adjust these deformation parameters, and additionally that it can learn to generate new deformations to improve the quality of the generated liquid surfaces.

To train our machine learning model, we propose the following objective function, which measures the similarity of a known reference surface $\psi_r(\bm{\alpha})$ and the corresponding, approximated result $\phi_{\mathrm{def}}(\bm{\alpha})$. We introduce the numerical equivalent of the loss

$L = \sum_{\mathbf{x}} \big( \phi_{\mathrm{def}}(\mathbf{x}, \bm{\alpha}) - \psi_r(\mathbf{x}, \bm{\alpha}) \big)^2$   (7)

which approximates the analytical loss $\int \| \phi_{\mathrm{def}} - \psi_r \|^2 \, d\mathbf{x}$. While this loss function looks trivial at first, note that $\phi_{\mathrm{def}}$ from Eq. (6) encapsulates a series of highly non-linear deformation steps. In the following, a central challenge will be to compute reliable gradients in spite of these accumulated non-linearities.

4.1. Learning Optimal Deformation Parameters

We will first focus on adjusting the interpolation weights computed from the parameters $\bm{\alpha}$. We replace the simulation parameters $\alpha_i$ in Eq. (6) with new parameters $\beta_i$ to be inferred by a neural network in order to minimize Eq. (7). The application of the deformations weighted by $\beta_i$ includes our alignment step from Sec. 3.1, and hence the neural network needs to be aware of its influence.

The network is characterized by the number of layers, by the activation function $f$, the number of nodes in each layer, and their respective weights $\mathbf{w}$. We can write the result after activation of the $j$-th node in the $l$-th layer as a function of all the activations in the previous layer:

$a_j^{(l)} = f\Big( \sum_k w_{jk}^{(l)} \, a_k^{(l-1)} \Big)$   (8)

We use the common simplified notation to include the bias in $\mathbf{w}$ with a constant input for each node. Through composition, we can construct the parameter function

$\bm{\beta}(\bm{\alpha}, \mathbf{w}) = (\beta_1, \dots, \beta_N) = \mathbf{a}^{(L)}\big( \mathbf{a}^{(L-1)}( \cdots \mathbf{a}^{(1)}(\bm{\alpha}) ) \big)$   (9)

which is a function of the input parameters $\bm{\alpha}$ and all weights $\mathbf{w}$. We replace all instances of $\alpha_i$ in Eq. (6) with $\beta_i$ to compute a differently weighted, aligned final deformation field $u_{\mathrm{fin}}$.

To train the neural network, we need to specify gradients of Eq. (7) with respect to the network weights $\mathbf{w}$. With the chain rule we obtain $\partial L / \partial \mathbf{w} = \sum_i (\partial \beta_i / \partial \mathbf{w}) (\partial L / \partial \beta_i)$. Since the derivative of the network output with respect to a specific network weight is easily calculated with backpropagation [Bishop, 2006], it is sufficient for us to specify the second term. The gradient of Eq. (7) with respect to the deformation parameter $\beta_i$ is given by

$\frac{\partial L}{\partial \beta_i} = \sum_{\mathbf{x}} 2 \, \frac{\partial \phi_0(\mathbf{x} - u_{\mathrm{fin}}(\mathbf{x}))}{\partial \beta_i} \, \big( \phi_0(\mathbf{x} - u_{\mathrm{fin}}(\mathbf{x})) - \psi_r(\mathbf{x}) \big)$   (10)

where we have inserted Eq. (6). While the second factor of the product is easily computed, we need to calculate the first factor by differentiating Eq. (6) with respect to $\beta_i$, which yields

$\frac{\partial \phi_0(\mathbf{x} - u_{\mathrm{fin}}(\mathbf{x}))}{\partial \beta_i} = -\nabla \phi_0 \Big|_{\mathbf{x} - u_{\mathrm{fin}}(\mathbf{x})} \cdot \frac{\partial u_{\mathrm{fin}}(\mathbf{x})}{\partial \beta_i}$   (11)

As the gradient of $\phi_0$ is straightforward to compute, $\partial u_{\mathrm{fin}} / \partial \beta_i$ is crucial in order to compute a reliable derivative. It is important to note that even for the case of small corrections $u_{\mathrm{inv}}$, Eq. (5) cannot be handled as another backward-advection step of the form $u_{\mathrm{fin}}(\mathbf{x}) = u_{\mathrm{sum}}(\mathbf{x} - u_{\mathrm{inv}}(\mathbf{x}))$. While it might be tempting to assume that differentiating this advection equation will produce reasonable outcomes, it can lead to noticeable errors in the gradient. These in turn quickly lead to diverging results in the learning process, due to the non-linearity of the problem.

The correct way of deriving the change in $u_{\mathrm{fin}}$ is by taking the total derivative of Eq. (5) with respect to $\beta_i$:

$\frac{\partial u_{\mathrm{fin}}}{\partial \beta_i}\Big|_{\mathbf{x} + u_{\mathrm{inv}}(\mathbf{x})} + J_{u_{\mathrm{fin}}}\big(\mathbf{x} + u_{\mathrm{inv}}(\mathbf{x})\big) \, \frac{\partial u_{\mathrm{inv}}(\mathbf{x})}{\partial \beta_i} = \frac{\partial u_{\mathrm{sum}}(\mathbf{x})}{\partial \beta_i}$   (12)

where $J_{u_{\mathrm{fin}}}$ denotes the Jacobian of $u_{\mathrm{fin}}$ with respect to $\mathbf{x}$, evaluated at $\mathbf{x} + u_{\mathrm{inv}}(\mathbf{x})$. Rearranging Eq. (12) and inserting $\partial u_{\mathrm{sum}} / \partial \beta_i = u_i^*$ and $\partial u_{\mathrm{inv}} / \partial \beta_i = -u_i^*$ yields

$\frac{\partial u_{\mathrm{fin}}}{\partial \beta_i}\Big|_{\mathbf{x} + u_{\mathrm{inv}}(\mathbf{x})} = u_i^*(\mathbf{x}) + J_{u_{\mathrm{fin}}} \, u_i^*(\mathbf{x})$   (13)
$= \big( \mathbf{I} + J_{u_{\mathrm{fin}}} \big) \, u_i^*(\mathbf{x})$   (14)

We note that the Jacobian in the equation above has small entries due to the smooth nature of the deformations $u_i$. Thus, compared to the unit matrix $\mathbf{I}$ it is small in magnitude. Note that this relationship is not yet visible in Eq. (12). We have verified in experiments that including $J_{u_{\mathrm{fin}}}$ does not improve the gradient significantly, and we thus set this Jacobian to zero, arriving at

$\frac{\partial u_{\mathrm{fin}}}{\partial \beta_i}\Big|_{\mathbf{x} + u_{\mathrm{inv}}(\mathbf{x})} = u_i^*(\mathbf{x})$   (15)

where the $u_i^*$ are the deformation fields aligned for the target configuration from Eq. (2). We use Eq. (15) to estimate the change in the final deformation field for changes of the $i$-th deformation parameter. We see that this equation has the same structure as Eq. (5): on the left-hand side, we have $\partial u_{\mathrm{fin}} / \partial \beta_i$, evaluated at $\mathbf{x} + u_{\mathrm{inv}}(\mathbf{x})$, whereas on the right-hand side $u_i^*$ is evaluated at $\mathbf{x}$. To calculate $\partial u_{\mathrm{fin}} / \partial \beta_i$, we can therefore use the same forward-advection algorithm that is applied to the correction in Eq. (5).
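A numerical sketch of this gradient computation, reusing the helper sketches above; it implements Eqs. (10), (11) and (15) in 2D, and is an illustration of the derivation rather than the exact production code:

import numpy as np

def loss_gradient_beta(phi0, psi_ref, u_star, betas):
    # The change of u_fin w.r.t. beta_i is the aligned field u_star[i],
    # forward-advected with the same correction step as in Eq. (5).
    u_sum = sum(b * u for b, u in zip(betas, u_star))
    u_inv = sum((1.0 - b) * u for b, u in zip(betas, u_star))
    u_fin = forward_advect(u_sum, u_inv)
    residual = advect_sdf(phi0, u_fin) - psi_ref
    # gradient of phi0, evaluated at the advected positions x - u_fin(x)
    grad_phi0 = np.stack([advect_sdf(g, u_fin) for g in np.gradient(phi0)])
    grads = []
    for u_i in u_star:
        du_fin = forward_advect(u_i, u_inv)          # Eq. (15)
        dphi = -(grad_phi0 * du_fin).sum(axis=0)     # Eq. (11)
        grads.append(2.0 * (residual * dphi).sum())  # Eq. (10)
    return grads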

With this, we have all the necessary tools to train the neural network with back-propagation by computing the necessary terms in Eq. (10). We will demonstrate the effects of the adjusted interpolation parameters in more detail in Sec. 7.

Figure 5. Schematic of our two neural networks. While the parameter network itself is very simple, its cost function allows it to learn how to optimally apply multiple long-range, non-linear deformation fields. The generative network on the bottom instead learns to generate whole deformation fields to refine the final surface.

4.2. Generating Deformations with Neural Networks

Our efforts so far have been centered around producing a good approximation of $\psi_r(\bm{\alpha})$ with the given end-point deformations $u_i$. The performance of this method is therefore inherently constrained by the amount of variation we can produce with the deformation inputs. To allow for more variation, we propose to generate an additional space-time deformation field $u_{\mathrm{NN}}$ that changes with the simulation parameters $\bm{\alpha}$. Once again, we model this function with a neural network, effectively giving the network more expressive capabilities to directly influence the final deformed surface. In the following, we explain how to set up and train this generative neural network architecture.

We choose a network structure with a set of four-dimensional deconvolution layers. We apply the trained deformation with an additional advection step:

$\phi_{\mathrm{def}}(\mathbf{x}) = \phi_0\big(\mathbf{x} - u_{\mathrm{fin}}(\mathbf{x})\big)$   (16)
$\phi_{\mathrm{fin}}(\mathbf{x}) = \phi_{\mathrm{def}}\big(\mathbf{x} - u_{\mathrm{NN}}(\mathbf{x})\big)$   (17)

This way, the network only has to learn to refine the surface after applying the weighted end-point deformations, in order to accommodate the non-linear behavior of the parameter space.

The deformation field $u_{\mathrm{NN}}$ is characterized by the network structure, the network weights $\mathbf{w}$ and the activation function $f$. As input, we supply the network with the simulation parameters $\bm{\alpha}$ as a vector. The outputs of the network are four-component vectors $\mathbf{d}_j$, at the resolution of the deformation grid. Note that in general the SDF resolution and the deformation resolution do not need to be identical. Given a fixed SDF resolution, we can use a smaller resolution for the deformation, which reduces the number of weights and computations required for training. Thus in practice, each four-dimensional vector of the deformation acts on a region of the SDF, for which we assume the deformation to be constant. Therefore, we write the deformation field as

$u_{\mathrm{NN}}(\mathbf{x}) = \sum_j \chi_j(\mathbf{x}) \, \mathbf{d}_j$   (18)

where $\chi_j$ is the indicator function of the $j$-th region on which the four-dimensional deformation vector $\mathbf{d}_j$ acts. This vector is the $j$-th output of the neural network.
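Since the deformation is piecewise constant per region, upsampling the network output to the SDF resolution reduces to a nearest-neighbor repeat. A small sketch with hypothetical shapes, again in 2D with two vector components:

import numpy as np

def expand_deformation(d, sdf_shape):
    # d: network output of shape [GX, GY, 2], one vector per region.
    # Returns a field of shape [2, X, Y] that is constant on each region,
    # i.e., the discrete form of Eq. (18).
    rx, ry = sdf_shape[0] // d.shape[0], sdf_shape[1] // d.shape[1]
    u = np.repeat(np.repeat(d, rx, axis=0), ry, axis=1)
    return np.moveaxis(u, -1, 0)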

To train the network weights $\mathbf{w}$, we need to calculate the gradient of the loss function Eq. (7) with respect to them. Just like in the previous section, it is sufficient to specify the gradient with respect to the network outputs $\mathbf{d}_j$; for the update of $\mathbf{w}$ we can rely on backpropagation. Deriving Eq. (7) yields

$\frac{\partial L}{\partial \mathbf{d}_j} = \sum_{\mathbf{x} \in \Omega_j} -\nabla \phi_{\mathrm{def}} \Big|_{\mathbf{x} - \mathbf{d}_j} \, \big( \phi_{\mathrm{fin}}(\mathbf{x}) - \psi_r(\mathbf{x}) \big)$   (19)

Thus, we can calculate the derivative by summation over the region $\Omega_j$ that is affected by the network output $\mathbf{d}_j$. The gradient term $\nabla \phi_{\mathrm{def}}$ is first calculated by evaluating a finite difference stencil on $\phi_{\mathrm{def}}$, and then advecting it with the corresponding deformation vector $\mathbf{d}_j$. The other terms in Eq. (19) are readily available. Alg. 1 summarizes how the training of the network weights is performed.

Data: training samples from $\{\psi_r(\bm{\alpha})\}$
Result: trained network weights $\mathbf{w}$
for each training sample $\bm{\alpha}$ do
       evaluate the parameter network to compute $\bm{\beta}(\bm{\alpha})$
       load reference SDF $\psi_r(\bm{\alpha})$, initial SDF $\phi_0$
       calculate $u_{\mathrm{sum}}$, $u_{\mathrm{inv}}$ and $u_{\mathrm{fin}}$
       $\phi_{\mathrm{def}}$ = advect $\phi_0$ with $u_{\mathrm{fin}}$
       calculate $\nabla \phi_{\mathrm{def}}$
       evaluate the deformation network to compute the outputs $\mathbf{d}_j$
       assemble $u_{\mathrm{NN}}$ from $\mathbf{d}_j$ according to Eq. (18)
       advect $\phi_{\mathrm{def}}$ with $u_{\mathrm{NN}}$
       advect $\nabla \phi_{\mathrm{def}}$ with $u_{\mathrm{NN}}$
       for each $\mathbf{d}_j$ do
             calculate the gradient $\partial L / \partial \mathbf{d}_j$ according to Eq. (19)
       end for
      backpropagate the gradients of Eq. (20) to adjust $\mathbf{w}$
end for
ALGORITHM 1 Training the network weights

Working in conjunction, our two machine learning steps capture significantly more complex behavior of the fluid space-time surface over the whole parameter domain. We first adjust the application of the initial deformations with $\bm{\beta}(\bm{\alpha})$, and then let our network generate an additional, input-dependent deformation field. We will demonstrate in Sec. 7 that learning a correction to the previously advected SDF performs significantly better than learning the full deformation from $\phi_0$ in a single step.

Figure 6. The top image illustrates the initial conditions of our two-dimensional parameter space setup. It consists of a set of two-dimensional liquid simulations, which vary the position of the liquid drop along x as $\alpha_1$, and its size as $\alpha_2$. The bottom half shows the data used for training at a fixed time. Note the significant amount of variance in positions of small-scale features such as the thin sheets. Both images show only a subset of the whole data.

4.3. Training Details

To train both networks we apply the commonly used stochastic gradient descent with backpropagation [Bishop, 2006] and the ADAM optimizer. The outputs of the parameter network are the deformation parameters $\beta_i$. The network structure that we use to model this is a simple network with two fully connected layers of 8 nodes each. The deformation network generates $u_{\mathrm{NN}}$ with the help of two fully-connected layers of size 16 and 2592, respectively, followed by two de-convolutional layers. In practice, we also found that a small amount of weight decay and regularization of the generated deformations can help to ensure smoothness. Thus, the loss function of the deformation network, with regularization parameters $\gamma_1$ and $\gamma_2$, is

$L_{\mathrm{total}} = L + \gamma_1 \| \mathbf{w} \|_2^2 + \gamma_2 \| u_{\mathrm{NN}} \|_2^2$   (20)

We noticed that regular SDFs can lead to overly large error values far away from the surface. Thus, we apply a soft clamping function to the SDF values, in order to put more emphasis on the surface region. Both network structures are illustrated in Fig. 5, and we use ReLU non-linearities throughout.
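For reference, the two architectures can be sketched as follows with Keras layers. The 2D transposed convolutions stand in for the custom 4D deconvolutions of our implementation, and the filter counts as well as the 9x9x32 reshape are assumptions chosen only to match the 2592-node layer:

import tensorflow as tf

N = 2  # number of simulation parameters

param_net = tf.keras.Sequential([        # Sec. 4.1: alpha -> beta
    tf.keras.layers.Dense(8, activation='relu', input_shape=(N,)),
    tf.keras.layers.Dense(8, activation='relu'),
    tf.keras.layers.Dense(N),
])

deform_net = tf.keras.Sequential([       # Sec. 4.2: alpha -> deformation
    tf.keras.layers.Dense(16, activation='relu', input_shape=(N,)),
    tf.keras.layers.Dense(2592, activation='relu'),
    tf.keras.layers.Reshape((9, 9, 32)),               # assumed layout
    tf.keras.layers.Conv2DTranspose(8, 4, strides=2, padding='same',
                                    activation='relu'),
    tf.keras.layers.Conv2DTranspose(2, 4, strides=2, padding='same'),
])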

As is often the case with PDEs, special care is required for boundary conditions; in our case, the sides of our domain. Here, we use the common practice of Neumann boundary conditions for values outside of the computational domain. Hence, the SDF values extend to the outside with constant values. The problem with this approach is that it leads to vanishing gradients in Eq. (19), which in turn often leads to artificial minima or maxima in the loss function. While this could be alleviated with strong regularization, we instead perform an on-the-fly distance extrapolation. Here we use the knowledge that the outside of our domain will always lie outside of the liquid volume. Thus, we increase the retrieved SDF values by the distance of the retrieval point to the domain boundary. In this way, Eq. (19) can still be evaluated correctly at the boundaries.
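A minimal sketch of this on-the-fly extrapolation, assuming a 2D grid and nearest-cell sampling (names are our own):

import numpy as np

def sample_extrapolated(phi, pos):
    # Sample the SDF at a (possibly out-of-domain) position and add the
    # distance to the domain boundary, so that gradients do not vanish
    # outside the grid.
    hi = np.array(phi.shape, dtype=np.float64) - 1.0
    clamped = np.clip(pos, 0.0, hi)
    outside = np.linalg.norm(pos - clamped)   # zero inside the domain
    i, j = np.rint(clamped).astype(int)
    return phi[i, j] + outside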

For our implementation we use the TensorFlow framework, into which we integrated our custom deformation-aware loss function from Eq. (7), as well as a custom four-dimensional deconvolution, as four-dimensional convolutions are currently not supported by default in neural network libraries.
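A sketch of how such a deformation-aware loss could look in TensorFlow, with tanh standing in for the soft clamp (an assumption) and the two regularizers of Eq. (20):

import tensorflow as tf

def deformation_loss(phi_fin, psi_ref, u_nn, weights, gamma1, gamma2):
    # phi_fin: deformed SDF from Eqs. (16)/(17); psi_ref: reference SDF;
    # u_nn: generated deformation field; weights: trainable variables.
    clamp = tf.tanh                      # soft clamp emphasizing the surface
    data = tf.reduce_sum(tf.square(clamp(phi_fin) - clamp(psi_ref)))
    reg_w = gamma1 * tf.add_n([tf.nn.l2_loss(w) for w in weights])
    reg_u = gamma2 * tf.reduce_sum(tf.square(u_nn))
    return data + reg_w + reg_u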

Figure 7. An example of our parameter learning approach. From left to right: direct application of deformation weights, reference only, and parameters learned by our NN. Linear and NN versions show the reference in light yellow in the background. Our NN adjusts the deformation parameters to better represent the left sheet.

Finally, the training is performed separately for both networks. Since the end-point deformations are applied first, we typically start by training the parameter network with ca. 1000 training steps at a learning rate of 0.001. Once this is completed, we switch to the deformation network, and train it for another ca. 9000 iterations, with the same learning rate. The full set of training parameters including regularization settings can be found in Table 2.

4.4. Evaluation

(a) Parameter Learning
(b) Deformation learning
Figure 8. Loss during training both for parameter learning and deformation learning. In yellow we show the loss for the current sample, while the dark line displays the loss evaluated on the validation set.

In order to evaluate our method, we will first use a two-dimensional parameter space with two-dimensional surfaces. For this purpose we use the SDFs extracted from 2D simulations of a drop falling into a basin. As simulation parameters we choose $\alpha_2$ to be the size of the drop, and $\alpha_1$ to be its initial x-position, as shown in Fig. 6. From each simulation we extract a single frame at a fixed time, which gives us a two-dimensional parameter space where each instance of $\bm{\alpha}$ has a corresponding two-dimensional SDF. In order to train the networks described in Sec. 4, we sample the parameter domain with a regular grid, which gives us 2156 training samples, of which we use 100 as a validation set.

Fig. 8 shows the validation loss and the training loss over the iterations, both for parameter learning and for deformation learning. We observe that in both cases the learning process reduces the loss, and finally converges to a stable solution. This value is lower in the case of deformation training, which is easily explained by the increased expressive capabilities of the deformation network. We verified that the solution converged by continuing training for another 36000 steps, during which the solution changed only minimally. In Fig. 7 we see the actual results of our parameter training algorithm. While the improvement is subtle, we can see that both sheets on top of the basin have a shape that is closer to the reference after adjusting the deformation parameters with the neural network.

As outlined above, it might seem attractive to use a simpler approximation for the forward advection in Eq. (10). However, due to the strong non-linearity of our setting, this does not yield a usable algorithm, as shown in Fig. 9.

Figure 9. Training with different gradient approximations: validation loss with a simplified advection (red), and the correct gradient (green). The simplified version does not converge.
Figure 10. Different example surfaces from the 2D parameter space of Fig. 6. From left to right: surfaces from previous work, surfaces reconstructed with PCA, adjusted deformations using a NN for only parameter learning, the reference surfaces, and on the far right the output of our full method with a deformation from the NN. Note that none of the other methods is able to reconstruct both arms of liquid in the first row, as well as the left sheet in the bottom row. The reference surfaces are shown in light yellow in the background for each version.

The effect of our deformation learning approach is illustrated in Fig. 10. This figure compares our full method (on the right) with several other algorithms. As a baseline, the result from previous work is shown on the far left. This version employs optical flow with the basic alignment from previous work [Thuerey, 2017]. As the surfaces of this example have larger distances and more complex correspondences than those from previous work, the closest-distance correspondences lead to unsatisfactory results. A different, but popular approach for non-linear dimensionality reduction, which can be considered an alternative to our method, is to construct a reduced basis with PCA. Using the mean surface with four eigenvectors yields a reduction similar to our method in terms of memory footprint. We additionally re-project the different reference surfaces into the reduced basis to improve the reconstruction quality of the PCA version. The result is a very smooth surface that fails to capture any details of the behavior of the parameter space, as can be seen in the second column of Fig. 10.

The middle column of this figure (in purple) shows the surfaces obtained with the learned deformation weights of our parameter NN (Fig. 5 top), but without an additional NN deformation network. As this case is based on end-point deformations, it cannot adapt to larger changes of surface structure in the middle of the domain. In contrast, using our full pipeline with an NN deformation yields surfaces that adapt to the varying behavior in the interior of the parameter space, as shown on the right side of Fig. 10. However, it is also apparent that the deformations generated by the NN do not capture every detail of the references. Our reduced representation in this case is regularized by the varying reference surfaces in small neighborhoods of $\bm{\alpha}$, and it learns an averaged behavior from the inputs.

Figure 11. This figure illustrates improvements for our long-range deformation approach. We match a sphere above a basin at different horizontal positions. Its source position is indicated by the blue arrow, and the offset target position by the yellow arrow. We compute a deformation, and the surfaces in the background show deformed SDF surfaces for weighting factors in the zero to one range. Small distances (a,b) can be correctly handled by a single deformation. For growing offsets, a single deformation step quickly merges the drop into the basin (c,d). The combined deformation of our approach (e) recovers the whole tangential motion along the surface of the basin.

5. Long-range Deformations

Our neural network can successfully refine the deformations based on an initial set of pre-computed deformations along the sides of the parameter space. Therefore, any improvement for these initial deformations typically also translates into improvements of quality for the output of our neural network. For this reason, we will propose an additional step that targets long-range correspondences, before discussing more complex four-dimensional results.

When computing the end-point deformations we have two closely related goals: to accurately match the target, and to establish correct correspondences between source and target. The algorithm we use (optical flow), and many others such as iterative closest point methods, work by matching closest distances. For inputs with significant differences, a deterioration of both desired qualities goes hand in hand: parts of the target shape can get lost, and the likelihood of reconstructing a target feature from an arbitrary different source feature increases. We make the following observation for sets of inputs from liquid simulations: high similarity between inputs improves matching quality and correspondences, but this similarity is not proportional to distances in simulation parameter space. I.e., two inputs A and B with a certain distance in parameter space can be more similar than A and a third input C with a smaller parameter-space distance. This is a result of the high non-linearity of the input simulations. This effect is, e.g., visible in Fig. 3, where the first image is visually closer to the third one on the right, while it is closer to the second one in parameter space. The goal of the algorithm described in the following is to find a sequence of deformations that keeps long-range correspondences by finding suitable correspondences for smaller steps. Because of the non-linearities in our data sets, this does not mean taking the smallest possible steps, but rather those that best preserve correspondences.

Therefore, given the problem to compute a deformation from a source surface at simulation parameter $a$ to a surface at $b$, we subdivide this interval by inserting additional data points. We can, e.g., use linear interpolation on the simulation parameters, and generate additional four-dimensional surfaces as data points along the line. In the following, we will denote the subdivisions of the interval with $c_i$, where $c_0 = a$ and $c_m = b$, and deformations for a section of this interval with $u_{i,j}$ (with $i < j$). We typically use a fixed spacing for the additional parameter samples $c_i$, but our method is not restricted to a regular sampling of the $[a,b]$ interval. Thus, the deformations with double indices cover a partial distance of a single dimension of our parameter space, in contrast to the final end-point deformations, e.g., $u_1$, which cover the full extent of a single dimension of the parameter space. In order to reach the target state we want to minimize the error between the final surface (computed by deforming the source) and the target, i.e., $E(\psi_a \circ u_{a,b}, \psi_b)$, where $E$ computes the error between two surfaces and $\psi \circ u$ denotes $\psi$ deformed by $u$. In the following we use an indicator function of the surface region for $E$ [Thuerey, 2017], but an $L^2$ norm of the SDF values with the soft clamp applied, as for our neural networks, is likewise applicable. To shorten the notation, we will denote the surface at subdivision point $c_i$ with $\psi_i$ in the following section.

Figure 12. A more complex surface illustrating the improvements of our long-range deformation approach. The source surface (left image) is matched with the target (far right) using a single deformation from previous work [2017] (middle left), and our approach for long-range deformations (middle right). The arrows in the two middle images indicate induced motions of the tip of the sheet (top of the dashed blue box in the left image). Our method correctly establishes correspondences between only the left arm of the source and the target sheet. The right arm of the source (dashed blue box) is correctly merged into the bulk volume in our method (dashed cyan arrow), while the single deformation erroneously merges it into the left sheet (orange arrow).

In contrast to previous work [Raveendran et al., 2014; Thuerey, 2017], we compute a single end-point deformation as a merged sequence of carefully selected deformations that minimizes the accumulated error. Each step in the deformation sequence potentially introduces an error that is propagated towards the target. This error is not propagated additively when applying several deformations in a row; however, the additive accumulation of errors represents a practical lower bound. Consider an example deformation from interval position $a$ to $b$ via an intermediate point $c$. For a surface deformed by the two deformations $u_{a,c}$ and $u_{c,b}$, the error bound is given by

$E\big( (\psi_a \circ u_{a,c}) \circ u_{c,b}, \ \psi_b \big) \geq E\big( \psi_a \circ u_{a,c}, \ \psi_c \big) + E\big( \psi_c \circ u_{c,b}, \ \psi_b \big)$   (21)

The reason is that $u_{a,c}$ will typically introduce imperfections, and $\psi_a \circ u_{a,c}$ will be a worse starting point for $u_{c,b}$ than $\psi_c$ itself. Thus, we minimize the accumulated errors along a sequence of discrete pairs $(c_i, c_j)$, each with an associated error value, and with $c_0 = a$, $c_m = b$. We calculate the total error for a sequence $S$ of such pairs with the error metric $E$ as

$E_{\mathrm{total}}(S) = \sum_{(i,j) \in S} E\big( \psi_i \circ u_{i,j}, \ \psi_j \big)$   (22)

Minimizing $E_{\mathrm{total}}$ is a classic dynamic programming problem. We compute a dense directed graph $G$, where the associated cost for each edge is the deformation error of the corresponding deformation. Thus, given a sequence of surfaces we compute deformations from every surface towards the next few surfaces, adding the edge with the cost from Eq. (22) to $G$. Theoretically, we could compute all possible deformations, but we found that a small number of look-ahead steps works well in practice. The optimal sequence is then computed by running Dijkstra's algorithm on $G$, and we compute the final deformation by concatenating the deformations along the optimal path with our alignment from Sec. 3.1 to yield a single end-point deformation for one dimension of the parameter space.
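A compact sketch of this search; edge_error(i, j) stands for the pre-computed deformation error of Eq. (22) between samples i and j (a hypothetical callback), and the look-ahead of three samples is an assumption:

import heapq

def optimal_deformation_path(n_samples, edge_error, max_span=3):
    # Dijkstra's algorithm on the deformation graph: nodes are the
    # parameter samples c_0..c_{n-1}; edges span at most max_span samples.
    # Returns the sample indices of the minimal-error deformation sequence.
    dist = {0: 0.0}
    prev = {}
    queue = [(0.0, 0)]
    while queue:
        d, i = heapq.heappop(queue)
        if i == n_samples - 1:
            break
        if d > dist.get(i, float('inf')):
            continue  # stale queue entry
        for j in range(i + 1, min(i + max_span + 1, n_samples)):
            nd = d + edge_error(i, j)
            if nd < dist.get(j, float('inf')):
                dist[j] = nd
                prev[j] = i
                heapq.heappush(queue, (nd, j))
    path, i = [n_samples - 1], n_samples - 1
    while i in prev:
        i = prev[i]
        path.append(i)
    return path[::-1]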

Examples

Fig. 11 shows an example deformation computed with this approach. In this example, the goal is to move a droplet hovering ca. 15 cells above the surface of a basin horizontally across it. We match a source surface (with the drop on the left) with targets where the drop is translated by various distances to the right. For larger offsets, a single regular deformation merges the drop into the basin, and re-creates the target droplet from the basin, instead of moving the droplet horizontally, as can be seen in Fig. 11 (c,d). Our algorithm can recover the whole translation of the droplet across the domain (Fig. 11e). While our method successfully establishes correspondences, the motion along the way is typically not perfectly rigid, and can lead to deformations of the droplet.

Figure 13. This plot shows the error values for the different intermediate steps of Fig. 11 from Eq. (22). The x-axis denotes source, the y-axis target position of the droplet. Larger error values have colors closer to yellow.

The error values from Eq. (22) for this test case are shown in Fig. 13. For this simple case, the error values change smoothly in the graph, and larger distances consistently get worse. A diagonal high-error line at a distance of ca. 18 cells is clearly visible in the graph. This is the critical point where the source drop has a similar distance to the target drop as to the basin. This typically causes the drop to break up when applying the deformation, leading to larger error measurements. Our algorithm detects the increased error, and for this case makes one large step towards this error barrier. Afterwards, it stays just below the barrier to reach the target state. Starting from the droplet's initial position, our algorithm thus finds a combination of deformations that reaches the target position.

A second example using the dataset of Fig. 6 can be seen in Fig. 12. Here the source is shown on the left, and the target surface on the right. The single deformation, in line with previous work [Thuerey, 2017], mostly restores the target shape, but erroneously merges both 'sheets' into the single wave of the target. In contrast, our method reconstructs the wave using only the left sheet of the source shape, and correctly merges the right arm into the surface of the basin. The full sequence can be seen in the accompanying video.

Note that while this approach takes into account the in-between data to improve correspondences, the resulting deformations still cannot reconstruct any of the details in between the end-points. We leave it to our neural network step to correct this behavior during deformation refinement.

Figure 14. Deformation learning with a flat surface as starting point. The reference is depicted in yellow, the deformed result in blue. Although the initial SDF is just a flat, planar surface, our network learns to capture a significant portion of the large-scale motion.

6. Mobile Device Implementation

The significant amount of pre-calculations to compute our reduced model pays off especially for interactive settings. To demonstrate the high performance our representation achieves, we have implemented a proof-of-concept application for mobile devices. Our application is divided into two parts: a main loop used for the surface generation and rendering, and a second loop which handles the user interaction, and calculates a final deformation for the main loop to display. Both loops are working with separate threads and are using double-buffering to provide the results.

The rendering and deformation tasks of our pipeline are computed with OpenGL ES 3.1, making use of its compute shaders and the mobile device GPU. The neural network, on the other hand, is evaluated on the CPU using the TensorFlow C++ library in conjunction with our custom 4D deconvolution. Here, we load a graph and a pre-trained set of weights computed during a pre-processing stage. Thus, our application only handles the evaluation of the network, but not the training. In order to guarantee temporal coherence, we temporally blend the SDFs when restarting the simulation or switching to a new deformation.

As our method produces a full implicit surface of the liquid, we implemented a three-state particle system to model splashes, bubbles, and foam floating on the surface [Ihmsen et al., 2012]. We approximate the fluid velocity using the gradients of the SDF. For bubbles and floating particles, we can again make use of the implicit representation to project them onto the liquid surface. The scene is rendered with a custom raytracer that handles reflection, refraction, and shading with an environment map. Additionally, we use curvature information for shading, which can be conveniently computed from the grid-based SDF data our algorithm generates.

Our second thread becomes active whenever the user interacts with the application. A user input typically yields a new point $\bm{\alpha}$ in our parameter space. Based on this value, we first let the neural network calculate the adjusted parameters $\bm{\beta}$. With these parameters, we calculate the aligned end-point deformations, and combine the resulting field with the deformation calculated by the neural network. While we typically perform two advection steps in our off-line implementation, we can use our alignment in this real-time setting to combine the end-point deformation and the neural network deformation into a single field. In this way, we only have to calculate a single advection step to extract the final, three-dimensional SDF surfaces from slices of the 4D buffers. We found that the fill-in step of our alignment from Sec. 3.1 had a negligible effect for the relatively small deformation resolutions of our app; thus it is not active for generating the interactive results below. Once the calculations are completed, the main loop swaps the active deformation buffer with the newly generated one, and starts displaying the new configuration.

7. Results

Below we will demonstrate results of our algorithm for three-dimensional simulations over time. We will show that our network approach is effective at learning good approximations of the four-dimensional liquid surfaces and that we can render them very efficiently in real-time scenarios.

Liquid Drop

As our first 4D test case, we chose a drop of liquid falling into a basin. As our simulation parameters we chose the x- and y-coordinates of the initial drop position, as well as the size of the drop. We typically assume that the z-axis points upwards. Examples of the complex behavior are shown on the left of Fig. 1, and in the supplemental video. To generate the training data, we sample the parameter space on a regular grid and run the corresponding simulations, generating a total of 1764 reference SDFs.

As a first test we demonstrate the effectiveness of the neural network deformation by showcasing it on its own. For this purpose, we conduct a test where $\phi_0$ is simply a basin at rest, i.e., a flat surface. We retrieve the result directly by evaluating $\phi_0(\mathbf{x} - u_{\mathrm{NN}}(\mathbf{x}))$. This means we do not employ any end-point deformations for this test. As $\phi_0$ contains only a planar surface that is not moving, the neural network has to learn to represent all motions of the liquid. Fig. 14 shows several results from the space of solutions, after training for 10000 iterations. For all pairs, our network correctly approximates the large-scale motion of the splashes. Considering the lack of features in the input, this is the expected result, as there are no small-scale details the network can shift in space and time to represent the targets.

We now explain the setup for our full pipeline with this drop scene. Here, the input contains a 4D SDF of a large drop falling into the upper right corner of the basin. The still frame of Fig. 15 illustrates the gain in quality we can achieve with our neural network deformation. Also note the increase in small-scale detail in comparison to the flat surface test above. Several other examples from the whole space of solutions can be found in the video. This direct comparison illustrates that our network successfully learns to correct the complex space-time surface of our inputs.

Figure 15. The reference surface in yellow. On the left, in purple, the deformed surface after applying the end-point deformations: the left and bottom sides of the splash are far from the reference surface. On the right, in cyan, our final surface resulting from the NN deformation: here the shape of the splash closely matches the reference on all sides.
Figure 16. Several screenshots of a smaller volume of liquid falling down in the "drop" test case of our mobile application.
Figure 17. Two frames generated with our approach (top) and with a direct SDF interpolation using a similar amount of overall memory (bottom). The latter loses the initial drop shape (right) and removes all splash detail (left). In addition, the direct SDF interpolation leads to strong ghosting artifacts with four repeated patterns.

We run the trained model for this setup in our mobile application to demonstrate the high performance of our approach. In this application, the user interacts by tapping on the screen to select the drop position. Fig. 16 shows our app in action, while performance statistics can be found in Table 1. The deformation calculation is the most time-consuming step, requiring 69 ms on average. These calculations are hidden from the user with our double-buffering setup. Once the deformation is calculated, rendering is very fast, yielding a frame rate of 50 fps on average.
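The double-buffering scheme itself is straightforward; the following is a hypothetical sketch of its structure (our app implements this in C++/OpenGL, and a full implementation would also guard the reads):

import threading

class DeformationBuffers:
    """The render loop reads `front` while a worker thread fills `back`;
    swapping the two makes a newly computed deformation visible."""

    def __init__(self):
        self.front = None   # deformation currently used for rendering
        self.back = None    # deformation being computed in the background
        self.lock = threading.Lock()

    def update(self, params, compute):
        self.back = compute(params)   # expensive: NN evaluation + alignment
        with self.lock:               # cheap swap, render loop never stalls
            self.front, self.back = self.back, self.front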

Note that the original simulation for this setup (to generate a single input) took more than eight minutes on average with a parallel implementation using all cores of a desktop workstation. Assuming a best-case slowdown of only 4x for the mobile device, it would require more than 32 minutes to run the original simulation there. Reducing the spatial and temporal resolution would result in faster runtimes, but also in a correspondingly reduced quality. Our app generates and renders a full liquid animation in less than one second in total, including network evaluation, alignment steps, and rendering. Thus, compared to the ca. 32 minutes this setup would require on the same mobile device, our algorithm generates the result roughly 2000 times faster. Our approach also represents the space of more than 1700 input simulations, i.e., more than 17GB of data, with less than 30MB of storage.
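The speed-up follows directly from the numbers above:

desktop_sim = 8 * 60           # s, more than eight minutes per simulation on the workstation
mobile_sim = 4 * desktop_sim   # s, assuming the best-case 4x slowdown (~32 minutes)
app_total = 1.0                # s, our full pipeline: network, alignment, rendering
print(mobile_sim / app_total)  # ~1920, i.e. roughly 2000x faster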

The advantages of our approach also become apparent when comparing our method with a direct interpolation of SDF data sets. Our algorithm requires a single full-resolution SDF, three half-resolution deformations, and the neural network weights (which are negligible in practice). While a single SDF requires ca. 2.5 million scalar values, all deformations and network weights together require ca. 2 million scalars. Thus our representation encodes the full behavior with less storage than two full SDFs. To illustrate this point, we show the result of a direct SDF interpolation in Fig. 17. Here we sample the parameter space with 8 SDFs in total, one at each corner of the 3D parameter space. Hence, this version requires more than 4x the storage of our approach. Despite the additional memory, the direct interpolation of SDFs leads to very obvious and undesirable artifacts. The results shown in the lower row of Fig. 17 represent neither the initial drop nor the resulting splash. Rather, the SDF interpolation leads to strong ghosting artifacts and an overall loss of detail. Instead of the single drop and splash that our method produces, it yields four smoothed, repeated copies.
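The storage comparison can likewise be made explicit from the counts above:

sdf = 2.5e6            # scalars in one full-resolution SDF
ours = sdf + 2.0e6     # plus half-resolution deformations and NN weights
interp = 8 * sdf       # direct interpolation: one SDF per parameter-space corner
print(interp / ours)   # ~4.4, i.e. more than 4x our storage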

                         Drop      Waterfall   LD-Drop
SDF resolution
Deformation resolution
NN evaluation            69 ms     410 ms      85 ms
Deformation creation     21.5 ms   70 ms       83 ms
Rendering                21 ms     35 ms       30 ms
Framerate                50 fps    30 fps      35 fps
Table 1. Performance details of our app, measured on a Samsung S8 device. The "deformation creation" step contains alignment and rescaling of the deformation fields.

Waterfall

Our second test setup illustrates a different parameter space, one that captures a variety of obstacle boundary conditions. Our first two simulation parameters are the heights of two stair-like steps, while the third parameter controls the position of a middle divider obstacle, as illustrated in Fig. 18. The liquid flows in a U-shaped manner around the divider and down the steps. For this setup, we use a higher overall resolution both for the space-time SDFs and for the output of the network. Performance details can be found in Table 1.

Fig. 19 depicts still frames captured from our mobile application for this setup. With this setup, the user can dynamically adjust the stair heights and the width of the dividing wall, while deformations are computed in the background. While this setup has more temporal coherence in its motion than the drop setup, the changing obstacle boundary conditions lead to strongly differing streams over the steps of the obstacle geometry. For example, changing the position of the divider changes the flow from a narrow, fast stream to a slow, broad front.

Large-dataset drop

As a third example, we demonstrate the gain in quality that can be achieved with additional deformations. While the drop setup above covers the whole basin starting from one of its corners, we now span the same parameter space (drops at different x- and y-positions, with different sizes) starting from a data point in the middle of the parameter space. Thus, two deformations cover the motion along the x-direction in this setup, where only a single one was available for Fig. 16. As these two deformations lead to different final configurations, the subsequent dimension along y requires four deformations, and eight are necessary for changing the size; in total, this leads to 14 deformations. Due to the density of the end-point deformations, we do not employ the parameter learning from Sec. 4.1.

Examples of the resulting deformation quality can be found in Fig. 20, which shows frames of two drops of different sizes over time. The additional deformations, in conjunction with a larger resolution for the initial SDF surface, result in fewer spurious deformations and an overall gain in small-scale surface detail.

The mobile application used to generate these examples is available online as "Neural Liquid Drop" on the Google Play Store [Prantl et al., 2017], and we encourage readers to try out our demo app.

Setup                Res.   SDF   Defo.   Sim.   Train   Steps (param. / defo.)   Reg.
2D setup, Fig. 6                          -      186s    40k / 10k                0 / 0
Drop, Fig. 16                             8.8m   22m     12k / 2k                 0 / 0
Waterfall, Fig. 19                        9.7m   186m    9k / 1k                  0 / 0
LD-Drop, Fig. 20                          8.8m   45m     - / 10k                  0
Table 2. Overview of our 2D and 4D simulation and machine-learning setups. Timings were measured on a Xeon E5-1630 at 3.7 GHz. Res., SDF, and Defo. denote the resolutions used for simulation, training, and the NN deformation, respectively; Sim. and Train denote simulation and training runtimes; the last columns list the training steps for parameters and for the deformation, and the regularization parameters.

8. Discussion and Limitations

Our formulation is centered around our loss function, which evaluates network outputs according to the difference between the deformed result they produce and a reference. Using this loss function, we were able to combine prescribed deformations such that they generate optimal results across the whole parameter range. Our generative neural network can then capture additional non-linear effects that the end-point deformations missed.
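To make the structure of this objective concrete, the following NumPy sketch measures the difference a deformation produces. It is an illustration only: the actual method requires a differentiable warp so that gradients can reach the network generating the deformation.

import numpy as np
from scipy.ndimage import map_coordinates

def deformation_aware_loss(phi_in, defo, phi_ref):
    """Mean squared difference between the warped input SDF and a reference.

    phi_in, phi_ref: (X, Y, Z) grids; defo: (3, X, Y, Z) displacements.
    """
    coords = np.indices(phi_in.shape).astype(np.float32) - defo
    warped = map_coordinates(phi_in, coords, order=1, mode='nearest')
    return float(np.mean((warped - phi_ref) ** 2))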

While we have demonstrated the effectiveness of this approach, it is apparent that the deformations generated by our algorithm can introduce spatial and temporal discontinuities. This is noticeable when deforming the flat surface in Fig. 14 without any regularization. We believe it is important to illustrate this behavior, as it shows the unmodified output of our neural network; additional regularization could be employed to smoothen the generated deformations.
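One simple option for such a regularizer, shown here as an assumed example rather than as part of our method, penalizes spatial finite differences of the deformation field:

import numpy as np

def smoothness_penalty(defo):
    """Tikhonov-style penalty on a (3, X, Y, Z) deformation field: the mean
    squared finite difference of each component along every spatial axis."""
    return sum(float(np.mean(np.diff(defo, axis=a) ** 2))
               for a in range(1, defo.ndim))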

Currently, our training steps also require a large amount of input data. While we believe this is necessary to some extent to specify the target space, we have experimented with using only a smaller, randomly chosen subset of samples for training. Our tests indicate that it is possible to achieve good results with data-sets reduced by a factor of two to four in comparison to our full, densely sampled training sets.
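Such a reduced training set can be drawn as a simple random subset, e.g. halving the 1764 samples of the drop setup (a sketch, with an arbitrary seed):

import numpy as np

n_samples = 1764                     # densely sampled set of the drop setup
rng = np.random.default_rng(seed=0)  # fixed seed for reproducibility
keep = rng.choice(n_samples, size=n_samples // 2, replace=False)
# `keep` indexes the reference SDFs retained for training.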

Right now our pipeline is not physics-aware, as it purely considers space-time surfaces and neglects potential constraints from the governing equations of the physical system (e.g., conservation of mass). While previous work shares this limitation [Raveendran et al., 2014; Thuerey, 2017], this property also makes these methods more general, as no assumptions need to be made about the inputs. However, it is clearly an interesting topic for future work to incorporate physics constraints.

Lastly, our approach is only applicable in settings where the inputs can be grouped around a reasonably representative initial surface. We have demonstrated that we can achieve a significant amount of variance with the examples above. For future extensions, we believe it will be very interesting to automate the process of finding good sub-spaces where our method is applicable within even larger spaces of liquid behavior.

Figure 18. The geometric setup of the three deformations of our waterfall setup from Fig. 19.
Figure 19. These screens illustrate our waterfall setup running in our mobile application. From left to right, the middle divider is pulled back, leading to an increased flow over the step in the back. In the right-most image, the left corner starts to move up, leading to a new stream of liquid pouring down into the outflow region in the right corner of the simulation domain.
Figure 20. Several screenshots of our large-dataset drop setup: the top row shows a large drop near the top left corner, while the bottom row shows a smaller drop near the top right corner. Despite the very different look, both animations were generated based on the same initial SDF surface.

9. Conclusions

We have presented a novel method to pre-compute a complex liquid space with deformation-aware neural networks. Our method represents a first approach that actively learns to deform surfaces with convolutional neural networks. The chosen representation enables real-time interactions with complex liquid effects that would otherwise be orders of magnitude too slow for interactive applications.

We believe that it will be exciting to see how generative neural networks can be successfully applied to problems in the context of fluid simulations. In the future, it will also be interesting to simulate other types of phenomena with our synthesis pipeline, and we plan to investigate how our deformation-aware networks perform for optical flow-like problems outside of the area of fluid simulations.

References

  • Bailer et al. [2016] Christian Bailer, Kiran Varanasi, and Didier Stricker. 2016. CNN-based Patch Matching for Optical Flow with Thresholded Hinge Loss. arXiv preprint: 1607.08064 (2016).
  • Barbič and James [2005] Jernej Barbič and Doug L James. 2005. Real-time subspace integration for St. Venant-Kirchhoff deformable models. ACM Trans. Graph. 24, 3 (2005), 982–990.
  • Bell and Bala [2015] Sean Bell and Kavita Bala. 2015. Learning visual similarity for product design with convolutional neural networks. ACM Trans. Graph. 34, 4 (2015), 98.
  • Bishop [2006] Christopher M. Bishop. 2006. Pattern Recognition and Machine Learning (Information Science and Statistics). Springer-Verlag New York, Inc., Secaucus, NJ, USA.
  • Chentanez and Müller [2011] Nuttapong Chentanez and Matthias Müller. 2011. Real-time Eulerian water simulation using a restricted tall cell grid. ACM Trans. Graph. 30, 4 (2011), 82.
  • Chentanez et al. [2015] Nuttapong Chentanez, Matthias Müller, Miles Macklin, and Tae-Yong Kim. 2015. Fast grid-free surface tracking. ACM Trans. Graph. 34, 4 (2015), 148.
  • Chu and Thuerey [2017] Mengyu Chu and Nils Thuerey. 2017. Data-Driven Synthesis of Smoke Flows with CNN-based Feature Descriptors. ACM Trans. Graph. 36(4), 69 (2017).
  • De Witt et al. [2012] Tyler De Witt, Christian Lessig, and Eugene Fiume. 2012. Fluid simulation using laplacian eigenfunctions. ACM Trans. Graph. 31, 1 (2012), 10.
  • Dosovitskiy et al. [2015] Alexey Dosovitskiy, Philipp Fischery, Eddy Ilg, Caner Hazirbas, Vladimir Golkov, Patrick van der Smagt, Daniel Cremers, Thomas Brox, et al. 2015. Flownet: Learning optical flow with convolutional networks. In International Conference on Computer Vision (ICCV). IEEE, 2758–2766.
  • Enright et al. [2003] Doug Enright, Duc Nguyen, Frederic Gibou, and Ron Fedkiw. 2003. Using the Particle Level Set Method and a Second Order Accurate Pressure Boundary Condition for Free-Surface Flows. Proc. of the 4th ASME-JSME Joint Fluids Engineering Conference (2003).
  • Girshick et al. [2014] Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. 2014. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proc. Conference on Computer Vision and Pattern Recognition. IEEE, 580–587.
  • Goodfellow et al. [2014] Ian J Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative Adversarial Nets. stat 1050 (2014), 10.
  • Gupta and Narasimhan [2007] Mohit Gupta and Srinivasa G Narasimhan. 2007. Legendre fluids: a unified framework for analytic reduced space modeling and rendering of participating media. In Proc. Symposium on Computer Animation. ACM/Eurographics, 17–25.
  • Ihmsen et al. [2012] Markus Ihmsen, Nadir Akinci, Gizem Akinci, and Matthias Teschner. 2012. Unified spray, foam and air bubbles for particle-based fluids. The Visual Computer 28, 6-8 (2012), 669–677.
  • Ihmsen et al. [2014] Markus Ihmsen, Jens Orthmann, Barbara Solenthaler, Andreas Kolb, and Matthias Teschner. 2014. SPH Fluids in Computer Graphics. In State of the Art Reports. Eurographics, 21–42.
  • Iizuka et al. [2016] Satoshi Iizuka, Edgar Simo-Serra, and Hiroshi Ishikawa. 2016. Let there be color!: joint end-to-end learning of global and local image priors for automatic image colorization with simultaneous classification. ACM Trans. Graph. 35, 4 (2016), 110.
  • Ilg et al. [2016] Eddy Ilg, Nikolaus Mayer, Tonmoy Saikia, Margret Keuper, Alexey Dosovitskiy, and Thomas Brox. 2016. FlowNet 2.0: Evolution of Optical Flow Estimation with Deep Networks. arXiv preprint: 1612.01925 (2016).
  • James et al. [2006] Doug L James, Jernej Barbič, and Dinesh K Pai. 2006. Precomputed acoustic transfer: output-sensitive, accurate sound generation for geometrically complex vibration sources. ACM Trans. Graph. 25, 3 (2006), 987–995.
  • James and Fatahalian [2003] Doug L James and Kayvon Fatahalian. 2003. Precomputing interactive dynamic deformable scenes. Proc. SIGGRAPH 22, 3 (2003).
  • Jones et al. [2016] Aaron Demby Jones, Pradeep Sen, and Theodore Kim. 2016. Compressing fluid subspaces. In Proc. Symposium on Computer Animation. ACM/Eurographics, 77–84.
  • Kass and Miller [1990] M. Kass and G. Miller. 1990. Rapid, Stable Fluid Dynamics for Computer Graphics. ACM Trans. Graph. 24, 4 (1990), 49–55.
  • Kim et al. [2013] Doyub Kim, Woojong Koh, Rahul Narain, Kayvon Fatahalian, Adrien Treuille, and James F O’Brien. 2013. Near-exhaustive precomputation of secondary cloth effects. ACM Trans. Graph. 32, 4 (2013), 87.
  • Kim and Delaney [2013] Theodore Kim and John Delaney. 2013. Subspace Fluid Re-simulation. ACM Trans. Graph. 32, 4, Article 62 (July 2013), 9 pages.
  • Kim and James [2012] Theodore Kim and Doug L James. 2012. Physics-based character skinning using multidomain subspace deformations. IEEE Trans. Vis. Comp. Grap. 18, 8 (2012), 1228–1240.
  • Krizhevsky et al. [2012] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. 2012. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems. NIPS, 1097–1105.
  • Ladicky et al. [2015] Lubor Ladicky, SoHyeon Jeong, Barbara Solenthaler, Marc Pollefeys, and Markus Gross. 2015. Data-driven fluid simulations using regression forests. ACM Trans. Graph. 34, 6 (2015), 199.
  • Lentine et al. [2011] Michael Lentine, Mridul Aanjaneya, and Ronald Fedkiw. 2011. Mass and momentum conservation for fluid simulation. In Symposium on Computer Animation. ACM, 91–100.
  • Macklin and Müller [2013] Miles Macklin and Matthias Müller. 2013. Position based fluids. ACM Trans. Graph. 32, 4 (2013), 104.
  • Macklin et al. [2014] Miles Macklin, Matthias Müller, Nuttapong Chentanez, and Tae-Yong Kim. 2014. Unified particle physics for real-time applications. ACM Trans. Graph. 33, 4 (2014), 153.
  • Manteaux et al. [2015] Pierre-Luc Manteaux, Wei-Lun Sun, François Faure, Marie-Paule Cani, and James F O’Brien. 2015. Interactive detailed cutting of thin sheets. In Proceedings of the 8th ACM SIGGRAPH Conference on Motion in Games. ACM, 125–132.
  • McNamara et al. [2004] Antoine McNamara, Adrien Treuille, Zoran Popović, and Jos Stam. 2004. Fluid Control Using the Adjoint Method. ACM Trans. Graph. 23, 3 (2004), 449–456.
  • Pentland and Williams [1989] Alex Pentland and John Williams. 1989. Good vibrations: Modal dynamics for graphics and animation. Proc. SIGGRAPH 23, 3 (1989).
  • Prantl et al. [2017] Lukas Prantl, Boris Bonev, and Nils Thuerey. 2017. Neural Liquid Drop. (2017). https://play.google.com/store/apps/details?id=fluidsim.de.interactivedrop.
  • Ranjan and Black [2016] Anurag Ranjan and Michael J. Black. 2016. Optical Flow Estimation using a Spatial Pyramid Network. CoRR abs/1611.00850 (2016). http://arxiv.org/abs/1611.00850
  • Raveendran et al. [2014] Karthik Raveendran, Nils Thuerey, Chris Wojtan, and Greg Turk. 2014. Blending Liquids. ACM Trans. Graph. 33 (4) (August 2014), 10.
  • Rumelhart et al. [1988] David E Rumelhart, Geoffrey E Hinton, and Ronald J Williams. 1988. Learning representations by back-propagating errors. Cognitive modeling 5, 3 (1988), 1.
  • Sangkloy et al. [2016] Patsorn Sangkloy, Nathan Burnell, Cusuh Ham, and James Hays. 2016. The sketchy database: learning to retrieve badly drawn bunnies. ACM Trans. Graph. 35, 4 (2016), 119.
  • Solenthaler and Pajarola [2009] Barbara Solenthaler and Renato Pajarola. 2009. Predictive-corrective incompressible SPH. ACM Trans. Graph. 28, 3 (2009), 40.
  • Solomon et al. [2015] Justin Solomon, Fernando De Goes, Gabriel Peyré, Marco Cuturi, Adrian Butscher, Andy Nguyen, Tao Du, and Leonidas Guibas. 2015. Convolutional wasserstein distances: Efficient optimal transportation on geometric domains. ACM Trans. Graph. 34, 4 (2015), 66.
  • Stam [1999] Jos Stam. 1999. Stable Fluids. In Proc. ACM SIGGRAPH. ACM, 121–128.
  • Stanton et al. [2014] Matt Stanton, Ben Humberston, Brandon Kase, James O’Brien, Kayvon Fatahalian, and Adrien Treuille. 2014. Self-Refining Games using Player Analytics. ACM Trans. Graph. 33 (4) (2014), 9.
  • Stava et al. [2008] Ondřej Stava, Bedřich Beneš, Matthew Brisbin, and Jaroslav Křivánek. 2008. Interactive terrain modeling using hydraulic erosion. In Proc. Symposium on Computer Animation. ACM/Eurographics, 201–210.
  • Teng et al. [2016] Yun Teng, David IW Levin, and Theodore Kim. 2016. Eulerian solid-fluid coupling. ACM Trans. Graph. 35, 6 (2016), 200.
  • Teng et al. [2015] Yun Teng, Mark Meyer, Tony DeRose, and Theodore Kim. 2015. Subspace condensation: full space adaptivity for subspace deformations. ACM Trans. Graph. 34, 4 (2015), 76.
  • Thuerey [2017] Nils Thuerey. 2017. Interpolations of Smoke and Liquid Simulations. ACM Trans. Graph. 36(1) (July 2017), 15.
  • Tompson et al. [2016] Jonathan Tompson, Kristofer Schlachter, Pablo Sprechmann, and Ken Perlin. 2016. Accelerating Eulerian Fluid Simulation With Convolutional Networks. arXiv preprint: 1607.03597 (2016).
  • Treuille et al. [2006] Adrien Treuille, Andrew Lewis, and Zoran Popović. 2006. Model reduction for real-time fluids. ACM Trans. Graph. 25, 3 (July 2006), 826–834.
  • Um et al. [2017] Kiwon Um, Xiangyu Hu, and Nils Thuerey. 2017. Splash Modeling with Neural Networks. arXiv preprint (2017).
  • Wang et al. [2007] Huamin Wang, Gavin Miller, and Greg Turk. 2007. Solving general shallow wave equations on surfaces. In Proc. Symposium on Computer Animation. ACM/Eurographics, 229–238.
  • Wei et al. [2015] Lingyu Wei, Qixing Huang, Duygu Ceylan, Etienne Vouga, and Hao Li. 2015. Dense human body correspondences using convolutional networks. arXiv preprint: 1511.05904 (2015).
  • Werbos [1974] P. J. Werbos. 1974. Beyond Regression: New Tools for Prediction and Analysis in the Behavioral Sciences. Ph.D. Dissertation. Harvard University. Department of Applied Mathematics.
  • Wicke et al. [2009] Martin Wicke, Matthew Stanton, and Adrien Treuille. 2009. Modular Bases for Fluid Dynamics. ACM Trans. Graph. 28, 3 (Aug. 2009), 39.
  • Xu and Barbič [2016] Hongyi Xu and Jernej Barbič. 2016. Pose-space subspace dynamics. ACM Trans. Graph. 35, 4 (2016), 35.
  • Yang et al. [2016] Cheng Yang, Xubo Yang, and Xiangyun Xiao. 2016. Data-driven projection method in fluid simulation. Computer Animation and Virtual Worlds 27, 3-4 (2016), 415–424.
  • Zhu and Bridson [2005] Yongning Zhu and Robert Bridson. 2005. Animating Sand as a Fluid. ACM Trans. Graph. 24, 3 (2005), 965–972.