Turbulence Enrichment with Physics-informed Generative Adversarial Network

Abstract

Generative Adversarial Networks (GANs) have been widely used for generating photo-realistic images [13, 26, 14]. A variant of GANs called super-resolution GAN (SRGAN) [21] has already been used successfully for image super-resolution, where low resolution images can be upsampled to larger images that are perceptually more realistic. However, when such generative models are used for data describing physical processes, there are additional known constraints that the models must satisfy, including governing equations and boundary conditions. In general, these constraints may not be obeyed by the generated data. In this work, we develop physics-based methods for generative enrichment of turbulence. We incorporate a physics-informed learning approach by modifying the loss function to minimize the residuals of the governing equations for the generated data. We analyze two trained physics-informed models: a supervised model based on convolutional neural networks (CNNs) and a generative model based on SRGAN, the Turbulence Enrichment GAN (TEGAN), and show that they both outperform simple bicubic interpolation in turbulence enrichment. We also show that physics-informed learning significantly improves the models' ability to generate data that satisfies the physical governing equations. Finally, we show that the enriched data from TEGAN recovers statistical metrics of the flow field, including energy metrics as well as inter-scale energy dynamics and flow morphology.

1 Introduction

Predicting turbulence accurately is extremely challenging, especially in capturing high-order statistics, due to its intermittent nature. Although it is well known that turbulence can be described very precisely with the three-dimensional (3D) Navier–Stokes equations, no known general analytical solution exists for these equations. While numerical methods provide a path to approximate solutions, the computational resources required for direct numerical simulations (DNSs) of the Navier–Stokes equations for engineering applications can be quite large, especially when the flows are highly turbulent, i.e. the Reynolds numbers are high and a broad range of spatial and temporal scales appears. As a result, reduced-order modeling of turbulent flows for low-cost computation has been a popular research area in fluid dynamics for decades.

Traditional turbulence modeling requires the mathematical modeling of the Reynolds stress tensor in the Reynolds-averaged Navier–Stokes (RANS) equations for RANS simulations, or of subgrid/subfilter scale effects in the filtered Navier–Stokes equations for large-eddy simulations (LESs). Although high-fidelity simulation and experimental data are often used to validate these models, there is no guarantee that they take the best functional forms, since they are usually introduced based only on physical intuition and heuristic arguments. Machine learning (ML), on the other hand, may provide an alternative, efficient strategy for constructing an optimal mapping between available turbulence data and statistical quantities of interest. Despite the pioneering work of Milano and Koumoutsakos [24] on using neural networks to reconstruct turbulent fields near walls, the use of ML for turbulence modeling received little attention until recent years [8, 2], following the successful application of deep neural networks (DNNs) in fields such as health care, computer vision, speech recognition, and sales forecasting. Zhang and Duraisamy [31] investigated the use of neural networks to reconstruct the spatial form of correction factors for turbulent and transitional flows. Ling et al. [23] developed a model for the Reynolds stress anisotropy tensor using a DNN with embedded Galilean invariance, and the improved accuracy of their DNN over two conventional RANS models was demonstrated by Kutz [18]. A synthetic turbulence generator for inflow using a convolutional neural network (CNN) was developed by Fukami et al. [10], and Lapeyre et al. [20] demonstrated a CNN-based approach to model subgrid scale reaction rates.

The traditional LES approach, which only models the subgrid scale effects on the filtered velocity field, may not be sufficient for applications that need more refined data than is available on the LES grid. For example, in particle-laden flow LESs, the full-scale carrier-phase velocity is required to accurately describe the transport of particles inside the flows [1]. In LESs of wind farm flows, the prediction of fatigue loading in wind turbines that operate within the planetary boundary layer needs a more refined, statistically accurate representation of the subgrid velocity fluctuations [11]. Recently, Fukami et al. [9] analyzed the use of ML models, including deep CNNs, for super-resolution of chaotic flows. While their methods may serve the purpose of turbulence enrichment, they only demonstrated their models on two-dimensional (2D) “turbulent” flows. Since the dynamics of turbulence is greatly affected by the 3D vortex-stretching effect, further investigation is necessary to determine whether state-of-the-art DNNs can be used for turbulence enrichment under physical constraints.

The CNN used in [9] is based on the super-resolution CNN (SRCNN) of Dong et al. [6], which was developed mainly for high resolution reconstruction of low resolution images using a deep CNN. That seminal work showed superior accuracy of the proposed model compared with other state-of-the-art image reconstruction methods. Since then, there have been many works [7, 22, 30, 19] on improving DNNs for super-resolution of images. However, those models still cannot handle large up-scaling factors or generate images with satisfactory high-wavenumber details. In the work by Ledig et al. [21], these problems were tackled with a generative adversarial network (GAN) for super-resolution of images. The idea of a GAN, which is composed of two neural networks (a generator and a discriminator) that contest with each other, was introduced by Goodfellow et al. [13]. In a GAN, the generator creates outputs while the discriminator evaluates them. GANs have been shown to perform better than other data-driven approaches like PCA in capturing high-order moments [4], and thus may be beneficial for turbulence modeling. While one may attempt to directly extend the super-resolution GAN (SRGAN) [21] to turbulence enrichment of 3D low resolution turbulent data, the generated high resolution data may not be physically realistic, since physical constraints are not present when these DNNs are designed for image upsampling. Incorporating physical constraints into DNNs such as the SRGAN framework is therefore crucial to performance in this context.

In this work, we propose extensions of the CNN-based residual block neural network [16] (TEResNet) and the SRGAN [21] (TEGAN) to enrich low resolution turbulent fields with high wavenumber content using a physics-informed technique. The input to our model is a low resolution turbulent flow field consisting of four 3D fields, and both TEResNet and TEGAN upsample each of these four fields to the high resolution grid. To our knowledge, physics-informed CNNs or GANs for turbulence enrichment have not been studied previously. In the works [27, 28], physics-based neural networks were developed to infer solutions to a partial differential equation (PDE). To achieve this, they used a mean squared error loss function with contributions not only from the error in the solution, but also from the residual of the governing PDE. This physics-informed technique was implemented and tested on a one-dimensional (1D) model problem, but may not be a feasible approach for solving large 3D chaotic systems like turbulent flows. Our results show that the proposed TEResNet and TEGAN models perform better than tricubic interpolation for upsampling, and the turbulence-enriched data compares well with the high resolution data.

2 Problem description and governing equations

The demonstration problem that we have chosen is incompressible forced isotropic homogeneous turbulence in a 3D triply periodic domain. The governing equations are the time-dependent incompressible Navier–Stokes equations:

\nabla \cdot \mathbf{u} = 0    (1)

\frac{\partial \mathbf{u}}{\partial t} + \left( \mathbf{u} \cdot \nabla \right) \mathbf{u} = -\nabla p + \nu \nabla^2 \mathbf{u}    (2)

where \mathbf{u}, p, and \nu are the velocity vector, kinematic pressure, and kinematic shear viscosity, respectively. Equation (1) is the continuity equation derived from the conservation of mass and Equation (2) is the conservation of momentum. Taking the divergence of the momentum equation and using the continuity equation, we get a Poisson equation for pressure:

\nabla^2 p = -\nabla \cdot \left[ \left( \mathbf{u} \cdot \nabla \right) \mathbf{u} \right]    (3)

The continuity equation and the pressure Poisson equation can be viewed as physical constraints on the velocity and pressure fields, respectively.
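These constraints can be checked numerically. The sketch below (our own illustration, not the paper's implementation) evaluates the continuity residual \nabla \cdot \mathbf{u} on a uniform periodic grid using second-order central differences; the divergence-free test field is a hypothetical example.

```python
import numpy as np

def continuity_residual(u, v, w, dx):
    """Divergence of a velocity field on a uniform triply periodic grid,
    using second-order central differences (np.roll handles periodicity)."""
    dudx = (np.roll(u, -1, axis=0) - np.roll(u, 1, axis=0)) / (2 * dx)
    dvdy = (np.roll(v, -1, axis=1) - np.roll(v, 1, axis=1)) / (2 * dx)
    dwdz = (np.roll(w, -1, axis=2) - np.roll(w, 1, axis=2)) / (2 * dx)
    return dudx + dvdy + dwdz

# Solenoidal test field: u = sin(x)cos(y), v = -cos(x)sin(y), w = 0
n = 32
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
dx = x[1] - x[0]
u, v, w = np.sin(X) * np.cos(Y), -np.cos(X) * np.sin(Y), np.zeros_like(X)
div = continuity_residual(u, v, w, dx)  # ~ machine zero for this field
```

For a divergence-free field, the discrete residual vanishes to round-off; the same stencil applied to a generated field measures how badly the constraint is violated.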

3 Dataset and features

3.1 Data generation

Data is generated using an in-house computational fluid dynamics (CFD) simulation code, PadeOps. Spectral methods are used for the spatial discretizations and a third-order total variation diminishing Runge–Kutta (RK-TVD) scheme [29] is adopted for time integration. We perform a time-evolving DNS on a uniform grid and collect snapshots separated in time by more than one integral time scale, which ensures that each example is statistically decorrelated. Each snapshot comprises four fields: the three components of the velocity vector and the kinematic pressure. Low resolution data is then generated by filtering the high resolution data to a lower resolution using the compact support explicit filter discussed in Appendix A. The filter is derived as a compact-stencil approximation to the sharp spectral low-pass filter with cutoff at a quarter of the Nyquist wavenumber. The velocity components of the high resolution data are normalized (and non-dimensionalized) by the root mean square of the velocity magnitude and the pressure by the mean square of the velocity magnitude. The dataset is divided into train/dev/test splits. Sample images of the high and low resolution data are presented in Section 5.
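A rough illustration of the filter-and-downsample step is sketched below, using a sharp spectral cutoff as a stand-in for the compact explicit filter of Appendix A, and assuming a 4x downsampling factor per direction (the grid sizes here are illustrative, not the paper's):

```python
import numpy as np

def spectral_downsample(field, factor=4):
    """Low-pass filter a periodic 3D field with a sharp spectral cutoff
    at a quarter of the Nyquist wavenumber (for factor=4), then subsample.
    Stand-in for the compact explicit filter of Appendix A."""
    n = field.shape[0]
    m = n // factor
    fhat = np.fft.fftn(field)
    k = np.fft.fftfreq(n, d=1.0 / n)  # integer wavenumbers
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    cutoff = m // 2  # new grid's Nyquist = original Nyquist / factor
    mask = (np.abs(kx) < cutoff) & (np.abs(ky) < cutoff) & (np.abs(kz) < cutoff)
    low = np.real(np.fft.ifftn(fhat * mask))
    return low[::factor, ::factor, ::factor]

hi = np.random.rand(32, 32, 32)
lo = spectral_downsample(hi)  # shape (8, 8, 8)
```

Because the cutoff sits at the coarse grid's Nyquist wavenumber, the subsampling introduces no aliasing and the mean of the field is preserved.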

4 Methods

4.1 Architecture of different neural network models

For the task of upsampling the low resolution data in a physically consistent manner, we use a GAN [13] in a fashion similar to super-resolution applications for image data [21].

The generator has a deep residual network architecture with each residual block having convolutional layers with batch normalization. The discriminator has a deep convolutional architecture with fully connected layers in the end for binary classification. The architectures of the generator and discriminator are depicted pictorially in Figure 1.

Figure 1: Architecture of generator and discriminator network with corresponding kernel size (k), number of channels (n) and stride (s) indicated for each convolutional layer.

4.2 Loss functions

As discussed in a previous section, the flow field is constrained by the continuity and pressure Poisson equations, which are repeated below:

\nabla \cdot \mathbf{u} = 0, \qquad \nabla^2 p = -\nabla \cdot \left[ \left( \mathbf{u} \cdot \nabla \right) \mathbf{u} \right]

These equations might not be satisfied exactly by the model's generated output. To counter this, the residuals of these equations can be used as a regularizer for the model through a physics loss.

The loss function minimized for the generator network during training is a combination of a content loss, a physics loss, and an adversarial loss.

(4)
(5)
(6)
(7)
Figure 2: Explanation of the role of physics loss.
  • Content loss:

    Mean squared error between the high resolution and generated fields.

    Mean squared error in the derived enstrophy field to sensitize the generator to high frequency content.

  • Physics loss: residuals of the continuity and pressure Poisson equations given above, similar to [27]. As depicted in the figure above, the inclusion of the physics loss forces the network toward physically realizable solutions.

  • Adversarial loss: a logistic loss function similar to that defined in [21].
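The composition of the generator loss described above can be sketched as follows; the weighting coefficients and input values are illustrative placeholders, not the trained hyperparameters:

```python
def total_generator_loss(mse_field, mse_enstrophy, res_continuity, res_poisson,
                         adv_loss, w_ens=0.5, w_phys=0.125, w_adv=1e-3):
    """Composite generator loss: content + physics + adversarial terms.
    All weights here are illustrative placeholders, not the tuned values."""
    content = mse_field + w_ens * mse_enstrophy          # field + enstrophy MSE
    physics = res_continuity + res_poisson               # PDE residuals
    return content + w_phys * physics + w_adv * adv_loss

# Hypothetical loss values for a single batch
loss = total_generator_loss(0.04, 0.02, 0.1, 0.08, 0.7)
```

The physics weight trades off data fidelity against constraint satisfaction, an effect explored in Section 5.1.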

To train the discriminator, we use the logistic loss based on predicted labels for real and generated data.

4.3 Training

TEResNet

A model with just the residual generator network without the adversarial component is termed TEResNet. We first train TEResNet to convergence and tune hyperparameters like the number of residual blocks and the physics loss parameters.

TEGAN

The model with both the residual network generator and the discriminator depicted above is termed TEGAN. The generator in TEGAN is first initialized using the weights from the trained TEResNet while the discriminator is initialized using the Xavier–He initialization [12, 15].

For the first few iterations in the training process, the discriminator alone is trained to negate the advantage that the generator has because of its pre-trained weights. Then, both the generator and discriminator are trained together with the adversarial loss active until the losses saturate and the discriminator's output saturates at 0.5. The Adam optimizer [17] is used for updating the weights of both networks.

5 Results

5.1 Training convergence

A batch size of 5 was chosen for training both TEResNet and TEGAN because of memory constraints of the GPU used for training. The learning rate was chosen from a hyper-parameter search. We also examine the effect of the physics loss weight through experiments on TEResNet. Figure 3 shows the content and physics losses during training of TEResNet with different values of this weight. We see that adding a non-zero weight to the physics loss improves the physics residual by almost an order of magnitude. We choose a value that gives a good compromise between the two losses. Another interesting observation is that for higher weighting of the physics loss, the trivial solution of zero fields becomes a local minimum.

Figure 3: Losses against iterations. Left: content loss; right: physics loss.

To train the TEGAN model, we initialize the generator's weights using the trained TEResNet. We then train only the discriminator until it is able to distinguish between real and generated data. Then we train the discriminator and generator together, updating the discriminator twice as often as the generator. To improve the stability of TEGAN training, we add a staircase decay for the learning rate. The decay rate and decay step were chosen by running a case without learning rate decay and estimating the number of steps required to get close to the minimum. Figure 4 shows the convergence of TEGAN during training. The discriminator output for generated data saturates at 0.5 and the physics loss converges to a smaller value than its initial value from TEResNet.
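The two-phase training schedule and the staircase learning-rate decay described above can be sketched as follows (all numerical values are illustrative defaults, not the tuned settings):

```python
def staircase_lr(step, base_lr=1e-4, decay_rate=0.5, decay_steps=1000):
    """Staircase decay: the learning rate is multiplied by decay_rate
    once every decay_steps iterations (a step function, not a smooth decay)."""
    return base_lr * decay_rate ** (step // decay_steps)

def update_schedule(warmup_steps, joint_steps):
    """Order of network updates: a discriminator-only warmup phase,
    then joint training with the discriminator ('D') updated twice
    per generator ('G') update."""
    updates = ["D"] * warmup_steps
    for _ in range(joint_steps):
        updates += ["D", "D", "G"]
    return updates

lr = staircase_lr(2500)      # two decay steps have elapsed
seq = update_schedule(2, 2)
```

The staircase form keeps the learning rate constant within each plateau, which avoids perturbing the adversarial balance on every iteration.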

Figure 4: Left: discriminator output for generated data; right: physics loss of TEGAN.

5.2 Model evaluation

Figure 5 compares the quality of upsampled data from tricubic interpolation, TEResNet and TEGAN. Both TEResNet and TEGAN outperform tricubic interpolation in reconstructing small-scale features. This is also evident from the plots of the velocity energy spectra in Figure 6. The outputs from TEResNet and TEGAN are visually indistinguishable, but Table 1 shows a more than 10% improvement of TEGAN over TEResNet in minimizing the physics loss, while the content losses of both models are similar.

Figure 5: Comparison of (from left to right) the low resolution, tricubic interpolation, TEResNet, TEGAN and high resolution fields, plotted (from top to bottom) for different flow variables on a slice of the 3D field.
Figure 6: Comparison of the velocity energy spectra for the different upsampling methods.
              Content loss        Physics loss
              Dev      Test       Dev      Test
TEResNet      0.049    0.050      0.078    0.085
TEGAN         0.047    0.047      0.070    0.072
% Difference  4.1      6.0        10.3     15.2

Table 1: Comparison of losses between TEResNet and TEGAN at the end of training.

5.3 Higher order correlations

The fundamental statistical theory of homogeneous turbulence arises from the analysis of the second-order velocity structure function given by

D_{ij}(\mathbf{r}) = \left\langle \left( u_i(\mathbf{x} + \mathbf{r}) - u_i(\mathbf{x}) \right) \left( u_j(\mathbf{x} + \mathbf{r}) - u_j(\mathbf{x}) \right) \right\rangle    (8)

where \mathbf{r} is the separation vector, u_i is the velocity in the i direction and \langle \cdot \rangle denotes statistical averaging. In homogeneous isotropic turbulence, the second-order velocity structure function depends only on two scalar functions as

D_{ij}(\mathbf{r}) = D_{NN}(r) \, \delta_{ij} + \left( D_{LL}(r) - D_{NN}(r) \right) \frac{r_i r_j}{r^2}    (9)

where D_{LL}(r) and D_{NN}(r) are the longitudinal and transverse structure functions respectively. Further, in homogeneous isotropic turbulence, the second-order structure function can be related to the two-point correlation tensor R_{ij}(\mathbf{r}) = \langle u_i(\mathbf{x}) u_j(\mathbf{x} + \mathbf{r}) \rangle as

D_{ij}(\mathbf{r}) = 2 \left( R_{ij}(\mathbf{0}) - R_{ij}(\mathbf{r}) \right)    (10)

Therefore, the second-order structure functions in homogeneous isotropic turbulence are fully characterized by the longitudinal and transverse two-point correlation functions f(r) = R_{11}(r \mathbf{e}_1) / R_{11}(\mathbf{0}) and g(r) = R_{22}(r \mathbf{e}_1) / R_{22}(\mathbf{0}) respectively [25]. Here, \mathbf{e}_1 represents the unit vector along the x_1 direction.
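For periodic fields such as ours, two-point correlations of this kind can be computed efficiently with the FFT. A minimal sketch (our own illustration, not the paper's implementation), verified on a single-mode field whose autocorrelation is known analytically:

```python
import numpy as np

def normalized_correlation(u, axis=0):
    """Normalized two-point autocorrelation of a velocity component along
    one periodic direction, via the Wiener-Khinchin theorem (FFT),
    averaged over the remaining homogeneous directions."""
    n = u.shape[axis]
    uhat = np.fft.fft(u, axis=axis)
    corr = np.fft.ifft(uhat * np.conj(uhat), axis=axis).real / n
    other = tuple(a for a in range(u.ndim) if a != axis)
    R = corr.mean(axis=other)
    return R / R[0]  # normalized so R(0) = 1

# Sanity check: for u = sin(x), the autocorrelation is cos(r)
n = 32
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
R = normalized_correlation(np.sin(X), axis=0)
```

Here `R[0] = 1` by construction and, for the single-mode field, `R` at a separation of pi/2 vanishes while at pi it equals -1, matching cos(r).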

Figure 7: Longitudinal (left) and transverse (right) two-point correlations compared from the low resolution data (red circles), high resolution data (black squares), bicubic interpolation (green diamonds) and TEGAN (blue triangles).

Figure 7 shows the longitudinal and transverse two-point correlation functions for the low and high resolution datasets as well as for bicubic interpolation and the data generated by the TEGAN model. Since the two-point correlation at zero separation is the turbulent kinetic energy (TKE), the low resolution data has a much lower value than the high resolution data at r = 0. This is to be expected since the finer energy containing scales are filtered out in the low resolution data. Bicubic interpolation merely interpolates the low resolution two-point correlations and completely misrepresents the true correlations. The TEGAN model, however, faithfully recovers both the longitudinal and transverse two-point correlation functions and is very close to the true high resolution values. This shows that TEGAN recovers the energy in the fine scales and accurately represents the statistical properties of the turbulent field. Since the two-point correlation functions are recovered accurately, derived statistical quantities like the integral length scale are also accurately represented.

Figure 8: Longitudinal (left) and transverse (right) two-point triple velocity correlations compared from the low resolution data (red circles), high resolution data (black squares), bicubic interpolation (green diamonds) and TEGAN (blue triangles).

Two-point correlations represent the energy in a field at different length scales. However, turbulence is a dynamic and non-linear phenomenon and many of the complexities of turbulent flows arise from the interaction between different length scales. The Karman–Howarth equation describes these energetic interactions across length scales in a statistical sense and is given by

\frac{\partial}{\partial t} \left( u'^2 f \right) = u'^3 \left( \frac{\partial}{\partial r} + \frac{4}{r} \right) k + 2 \nu u'^2 \left( \frac{\partial^2}{\partial r^2} + \frac{4}{r} \frac{\partial}{\partial r} \right) f    (11)

where u' is the RMS velocity fluctuation, f(r) is the longitudinal two-point correlation function and k(r) is the longitudinal two-point triple velocity correlation function. The transfer of energy from large to small scales of turbulence is represented by the term u'^3 \left( \partial / \partial r + 4/r \right) k in the Karman–Howarth equation above, which signifies the importance of the longitudinal two-point triple velocity correlation function in the dynamic energy cascade process.

Figure 8 shows the longitudinal and transverse two-point triple velocity correlation functions. Here again, simple bicubic interpolation only interpolates the low resolution triple correlations and does not recover the true high resolution data. The TEGAN model represents the true triple velocity correlations much better, with the maximum error being roughly a 15% overprediction of the longitudinal triple correlations. Third order statistics are typically much harder to reproduce than second order statistics for many enrichment methods, and the ability of a generative model to represent them is significant for applications sensitive to energy transfer mechanisms.

Figure 9: Q–R diagrams for the high resolution data (top left), bicubic interpolation (top right), TEGAN (bottom left) and contours comparing the three methods (bottom right). The grey dashed lines represent the restricted Euler solution in terms of the Q and R invariants of the velocity gradient tensor [3].

The Q–R diagram was introduced by Chong et al. [5] and Cantwell [3]. It describes the joint PDF of the second and third invariants, Q and R, of the velocity gradient tensor and thereby serves as a comparison of the flow morphology between the data generated by TEGAN, the high resolution data and bicubic interpolation. Figure 9 plots the Q–R diagrams for the three methods and a comparison between them. The restricted Euler solution (plotted in grey dashed lines) predicts that the flow evolves towards the bottom right of the diagram. We see that to be the case in the high resolution data. The structure of the Q–R diagram from the TEGAN data is also very similar to that of the high resolution data, with some differences in the bottom right of the plot that indicate differences in the very fine scales. Bicubic interpolation, as one would expect, cannot recover the right structure of the Q–R diagram and is very different from the high resolution data.
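The invariants plotted in such diagrams can be computed pointwise from the velocity gradient tensor A = \nabla \mathbf{u} as Q = -tr(A^2)/2 and R = -det(A). A small sketch with a pure-rotation example (our own illustration):

```python
import numpy as np

def qr_invariants(A):
    """Second and third invariants of a trace-free velocity gradient
    tensor A: Q = -tr(A @ A) / 2 and R = -det(A). The zero-discriminant
    (restricted Euler) lines in the Q-R plane satisfy
    (27 / 4) * R**2 + Q**3 = 0."""
    Q = -0.5 * np.trace(A @ A)
    R = -np.linalg.det(A)
    return Q, R

# Pure (solid-body) rotation: positive Q, zero R
A = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 0.0]])
Q, R = qr_invariants(A)  # rotation-dominated point: Q > 0, R = 0
```

Evaluating (Q, R) at every grid point of a field and histogramming the pairs produces the joint PDF shown in Figure 9.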

6 Conclusion/Future Work

In this work, we present two models, TEResNet and TEGAN, to enrich low resolution turbulence data and recover DNS-level quality by adding fine-scale features. We show the effect of different losses and the impact of physics-based losses on the performance of the networks. These physics-based losses essentially act as a regularizer that brings the model onto the manifold of physically realizable solutions. While both TEResNet and TEGAN outperform traditional bicubic interpolation, TEGAN captures the physics better than TEResNet and generalizes better to the test dataset.

The enriched data from TEGAN was then compared to the high resolution DNS data and contrasted against data from simple bicubic interpolation. TEGAN is able to enrich the low resolution fields and recover a large portion of the missing finer scales, as shown through comparisons of the energy spectrum and of the longitudinal and transverse two-point correlation functions. TEGAN also represents the energy dynamics across scales, as characterized by the two-point triple velocity correlations. Finally, we compare the Q–R diagrams from the different methods and show that TEGAN represents the flow morphology well.

The current TEGAN architecture and training can be further improved in the future by implementing the gradient penalty based Wasserstein GAN (WGAN-GP) [14] to overcome some of the shortcomings of traditional GANs related to training stability. Using wider distributions of data and tasking the discriminator with physics-based classification along with discrimination can also be explored to further improve TEGAN.

Acknowledgements

We gratefully acknowledge the help from Dr. Aditya S. Ghate in setting up the forced isotropic turbulence simulation problem in the PadeOps code used to generate the simulation data.

Appendix A Low-pass filter

An explicit low-pass filter with compact support is applied to the high resolution data before it is downsampled to low resolution data for training. The formula for filtering a variable \phi at grid point x_i on a uniform grid with spacing \Delta x is given by:

(12)

where \bar{\phi} is the filtered variable. The transfer function of the low-pass filter is shown in Figure 10 and is compared with a spectral filter.

Figure 10: Transfer function of the explicit filter.

Footnotes

  1. Code for our model is available at https://github.com/akshaysubr/TEGAN
  2. https://github.com/FPAL-Stanford-University/PadeOps

References

  1. M. Bassenne, M. Esmaily, D. Livescu, P. Moin and J. Urzay (2019) A dynamic spectrally enriched subgrid-scale model for preferential concentration in particle-laden turbulence. International Journal of Multiphase Flow 116, pp. 270–280.
  2. S. L. Brunton, B. R. Noack and P. Koumoutsakos (2019) Machine learning for fluid mechanics. Annual Review of Fluid Mechanics 52.
  3. B. J. Cantwell (1993) On the behavior of velocity gradient tensor invariants in direct numerical simulations of turbulence. Physics of Fluids A: Fluid Dynamics 5 (8), pp. 2008–2013.
  4. S. Chan and A. H. Elsheikh (2017) Parametrization and generation of geological models with generative adversarial networks. arXiv preprint arXiv:1708.01810.
  5. M. S. Chong, A. E. Perry and B. J. Cantwell (1990) A general classification of three-dimensional flow fields. Physics of Fluids A: Fluid Dynamics 2 (5), pp. 765–777.
  6. C. Dong, C. C. Loy, K. He and X. Tang (2015) Image super-resolution using deep convolutional networks. IEEE Transactions on Pattern Analysis and Machine Intelligence 38 (2), pp. 295–307.
  7. C. Dong, C. C. Loy and X. Tang (2016) Accelerating the super-resolution convolutional neural network. In European Conference on Computer Vision, pp. 391–407.
  8. K. Duraisamy, G. Iaccarino and H. Xiao (2019) Turbulence modeling in the age of data. Annual Review of Fluid Mechanics 51, pp. 357–377.
  9. K. Fukami, K. Fukagata and K. Taira (2019) Super-resolution reconstruction of turbulent flows with machine learning. Journal of Fluid Mechanics 870, pp. 106–120.
  10. K. Fukami, Y. Nabae, K. Kawai and K. Fukagata (2019) Synthetic turbulent inflow generator using machine learning. Physical Review Fluids 4 (6), pp. 064603.
  11. A. S. Ghate and S. K. Lele (2017) Subfilter-scale enrichment of planetary boundary layer large eddy simulation using discrete Fourier–Gabor modes. Journal of Fluid Mechanics 819, pp. 494–539.
  12. X. Glorot and Y. Bengio (2010) Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pp. 249–256.
  13. I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville and Y. Bengio (2014) Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672–2680.
  14. I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin and A. C. Courville (2017) Improved training of Wasserstein GANs. In Advances in Neural Information Processing Systems, pp. 5769–5779.
  15. K. He, X. Zhang, S. Ren and J. Sun (2015) Delving deep into rectifiers: surpassing human-level performance on ImageNet classification. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1026–1034.
  16. K. He, X. Zhang, S. Ren and J. Sun (2016) Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778.
  17. D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980.
  18. J. N. Kutz (2017) Deep learning in fluid dynamics. Journal of Fluid Mechanics 814, pp. 1–4.
  19. W. Lai, J. Huang, N. Ahuja and M. Yang (2017) Deep Laplacian pyramid networks for fast and accurate super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 624–632.
  20. C. J. Lapeyre, A. Misdariis, N. Cazard, D. Veynante and T. Poinsot (2019) Training convolutional neural networks to estimate turbulent sub-grid scale reaction rates. Combustion and Flame 203, pp. 255–264.
  21. C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz and Z. Wang (2016) Photo-realistic single image super-resolution using a generative adversarial network. arXiv preprint.
  22. B. Lim, S. Son, H. Kim, S. Nah and K. Mu Lee (2017) Enhanced deep residual networks for single image super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 136–144.
  23. J. Ling, A. Kurzawski and J. Templeton (2016) Reynolds averaged turbulence modelling using deep neural networks with embedded invariance. Journal of Fluid Mechanics 807, pp. 155–166.
  24. M. Milano and P. Koumoutsakos (2002) Neural network modeling for near wall turbulent flow. Journal of Computational Physics 182 (1), pp. 1–26.
  25. A. S. Monin and A. M. Yaglom (1971) Statistical Fluid Mechanics, Vols. 1 and 2. MIT Press, Cambridge, MA.
  26. A. Radford, L. Metz and S. Chintala (2015) Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434.
  27. M. Raissi, P. Perdikaris and G. E. Karniadakis (2017) Physics informed deep learning (part I): data-driven solutions of nonlinear partial differential equations. arXiv preprint arXiv:1711.10561.
  28. M. Raissi, P. Perdikaris and G. E. Karniadakis (2017) Physics informed deep learning (part II): data-driven discovery of nonlinear partial differential equations. arXiv preprint arXiv:1711.10566.
  29. C. Shu and S. Osher (1989) Efficient implementation of essentially non-oscillatory shock-capturing schemes, II. In Upwind and High-Resolution Schemes, pp. 328–374.
  30. Y. Tai, J. Yang and X. Liu (2017) Image super-resolution via deep recursive residual network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3147–3155.
  31. Z. J. Zhang and K. Duraisamy (2015) Machine learning methods for data-driven turbulence modeling. In 22nd AIAA Computational Fluid Dynamics Conference, pp. 2460.