In case anyone is wondering, the end-to-end fluid dynamics simulation is not achieved by a neural network alone here (though the authors claim that is possible).
If I understand the paper [1] correctly, they used a convolutional neural network (ConvNet) to speed up a Navier-Stokes partial differential equation (PDE) solver. Specifically, they are solving the "incompressible flow" Navier-Stokes equations.
In each time step, the solver performs an "advection" update followed by a "pressure projection" step. This "pressure projection" step involves solving a "Poisson equation," which is computationally intensive and usually requires an iterative solver. Instead of directly solving the Poisson equation, they train a ConvNet regression model to infer these "pressure projections."
They have to deal with lots of fiddly details, like the choice of neural-network training penalty function, handling PDE boundary conditions, and keeping errors from accumulating during the simulation.
Training the ConvNet involves running an off-line solver to generate statistical fluid dynamics data; the training step itself takes 48 hours on a GPU. As far as I can tell, the network is trained on a specific geometry and must be re-trained if the geometry changes. [2]
The final run-time improvement of their model is "competitive with state-of-the-art GPU implementations of Jacobi and Gauss Seidel solvers for pressure." They believe that run-time could be further improved.
[2] "When simulating smoke, we do not explicitly satisfy the Dirichlet boundary conditions in our offline simulation (for smoke we surround our simulation domain by an empty air region). Typically “ghost cells” are used to incorporate these boundary conditions into the linear system solution. In our system, we do not include ghost cells (or even mark voxels as boundary cells) and instead let the ConvNet learn to apply these boundary conditions by learning edge effects in the functional mapping of the input tensor to pressure output. This means that we must learn a new model if the external boundary changes. This is certainly a limitation of our approach, however it is one that we feel is not severe since most applications use a constant domain boundary."
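To make the split concrete, here is a toy sketch of that per-timestep loop in pure NumPy. Everything here is a simplification for illustration (collocated grid, trivial `advect` stub, zero Dirichlet pressure boundary), not the paper's actual scheme; the Jacobi loop in `solve_poisson` is the piece the paper replaces with a single ConvNet forward pass:

```python
import numpy as np

def advect(field, dt):
    """Toy stand-in: a real solver would trace the velocity field
    backward in time (semi-Lagrangian advection)."""
    return field

def divergence(u, v, h):
    """Central-difference divergence on the grid interior."""
    div = np.zeros_like(u)
    div[1:-1, 1:-1] = ((u[1:-1, 2:] - u[1:-1, :-2]) +
                       (v[2:, 1:-1] - v[:-2, 1:-1])) / (2 * h)
    return div

def solve_poisson(rhs, h, iters=200):
    """Jacobi iterations for laplacian(p) = rhs with p = 0 on the boundary.
    This iterative solve is the expensive 'pressure projection' step."""
    p = np.zeros_like(rhs)
    for _ in range(iters):
        p[1:-1, 1:-1] = 0.25 * (p[1:-1, 2:] + p[1:-1, :-2] +
                                p[2:, 1:-1] + p[:-2, 1:-1] -
                                h * h * rhs[1:-1, 1:-1])
    return p

def step(u, v, h, dt):
    """One time step: advect, then project out the divergence."""
    u, v = advect(u, dt), advect(v, dt)
    p = solve_poisson(divergence(u, v, h) / dt, h)
    # Subtracting the pressure gradient makes the velocity (nearly) divergence-free.
    u[1:-1, 1:-1] -= dt * (p[1:-1, 2:] - p[1:-1, :-2]) / (2 * h)
    v[1:-1, 1:-1] -= dt * (p[2:, 1:-1] - p[:-2, 1:-1]) / (2 * h)
    return u, v
```

In the paper's setup, the iterative `solve_poisson` call is what gets replaced by one evaluation of the trained regression model.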
> In each time step, the solver performs an "advection" update followed by a "pressure projection" step. This "pressure projection" step involves solving a "Poisson equation," which is computationally intensive and usually requires an iterative solver. Instead of directly solving the Poisson equation, they train a ConvNet regression model to infer these "pressure projections."
Yea, I mean, for applications like solving a Poisson equation, it seems like it would be cheaper to just solve the damn linear system. The system is going to be fairly sparse with a nice, well understood distribution of non-zeros and one should be able to make the linear solve fly. I'm curious where the tipping point is, however, between just solving the linear system and pushing everything through a CNN.
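To make "fairly sparse with a well-understood distribution of non-zeros" concrete: the pressure system is the standard 5-point Laplacian (7-point in 3D), which takes a few lines to assemble and solve with `scipy.sparse`. This is an illustrative sketch for a 2D n-by-n interior grid with Dirichlet boundaries, not anyone's production solver:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def poisson_matrix(n):
    """5-point Laplacian for an n-by-n interior grid with Dirichlet
    boundaries, built as a Kronecker sum; ~5 non-zeros per row."""
    T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
    I = sp.identity(n)
    return (sp.kron(I, T) + sp.kron(T, I)).tocsr()

n = 64
h = 1.0 / (n + 1)
A = poisson_matrix(n) / h**2    # 64x64 grid -> 4096 unknowns
b = np.ones(n * n)              # constant source term
p = spla.spsolve(A, b)          # direct sparse solve
p_cg, info = spla.cg(A, b)      # iterative Krylov alternative (info == 0 on success)
```

For a system this size the direct sparse solve is essentially instant; for large 3D grids one would switch to the iterative Krylov path with a good preconditioner, which is roughly the comparison point a learned projection has to beat.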
Moreover, the Poisson equation, being a linear PDE, is essentially solved using convolutions.
The crux of the problem seems to be the handling of boundaries: once FEM-ized, the nice grid-like structure gets replaced by an irregular graph (of similar topological dimension). Can someone please shed some light on this?
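On a regular grid the connection is literal: one Jacobi sweep for the Poisson equation is exactly a convolution of the current iterate with a fixed 4-neighbour stencil, which is why a ConvNet is such a natural fit there, and why irregular (FEM-style) boundaries spoil the picture, since the stencil stops being the same everywhere. A pure-NumPy sketch, assuming a zero Dirichlet boundary:

```python
import numpy as np

# One Jacobi step averages the four neighbours: a fixed convolution kernel.
STENCIL = np.array([[0.00, 0.25, 0.00],
                    [0.25, 0.00, 0.25],
                    [0.00, 0.25, 0.00]])

def conv2_valid(u, k):
    """'Valid' 2D correlation with a 3x3 kernel; on the grid interior this
    is exactly one application of the averaging stencil."""
    m, n = u.shape
    out = np.zeros((m - 2, n - 2))
    for i in range(3):
        for j in range(3):
            out += k[i, j] * u[i:i + m - 2, j:j + n - 2]
    return out

def jacobi_sweep(u, f, h):
    """One Jacobi iteration for laplacian(u) = f, written as a convolution.
    The fixed zero boundary ring plays the role of the 'ghost cells' that
    the paper's ConvNet has to learn implicitly."""
    u_new = u.copy()
    u_new[1:-1, 1:-1] = conv2_valid(u, STENCIL) - 0.25 * h * h * f[1:-1, 1:-1]
    return u_new
```

On an irregular mesh the neighbour weights vary from node to node, so this single shared kernel no longer exists; that is where graph-structured rather than grid-structured networks would come in.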
> As far as I can tell, the network is trained on a specific geometry and must be re-trained if the geometry changes.
The model is trained on a "training set" of various 3D models, and tested on a "test set" of different 3D models. The network can generalize to other 3D models and does not need retraining (see e.g. last part of abstract).
I think what they mean is that they would need to retrain the model if the boundary conditions at the outer boundary of the "studio" (the simulation domain) changed.
"We believe that run-time performance can be improved significantly" is an understatement. Jacobi and Gauss-Seidel have very poor convergence properties on Poisson-like problems [0]. A state-of-the-art Poisson solver is likely to use a Krylov-subspace method [1], preconditioned with multigrid [2] or some other coarse-grid preconditioner. This of course does not make such a proof of principle any less valuable.
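To put rough numbers behind that: on a model 2D Poisson problem, even unpreconditioned conjugate gradients needs an order of magnitude fewer iterations than Jacobi to reach the same residual, and multigrid-preconditioned Krylov methods do better still (omitted here for brevity). A matrix-free pure-NumPy comparison; this is my own toy setup, not from the paper:

```python
import numpy as np

def laplace_matvec(p, h):
    """Matrix-free 5-point operator A = -laplacian on the interior of an
    (n+2)x(n+2) grid with a zero Dirichlet boundary ring."""
    out = np.zeros_like(p)
    out[1:-1, 1:-1] = (4 * p[1:-1, 1:-1] - p[1:-1, 2:] - p[1:-1, :-2]
                       - p[2:, 1:-1] - p[:-2, 1:-1]) / h**2
    return out

def cg_iters(f, h, tol=1e-8, maxit=10_000):
    """Unpreconditioned conjugate gradients; returns iterations needed
    to reach the relative residual tolerance."""
    x = np.zeros_like(f)
    r = f - laplace_matvec(x, h)
    d = r.copy()
    rs = np.sum(r * r)
    r0 = np.sqrt(rs)
    for k in range(1, maxit + 1):
        Ad = laplace_matvec(d, h)
        alpha = rs / np.sum(d * Ad)
        x += alpha * d
        r -= alpha * Ad
        rs_new = np.sum(r * r)
        if np.sqrt(rs_new) < tol * r0:
            return k
        d = r + (rs_new / rs) * d
        rs = rs_new
    return maxit

def jacobi_iters(f, h, tol=1e-8, maxit=200_000):
    """Plain Jacobi; returns iterations to the same relative residual."""
    u = np.zeros_like(f)
    r0 = np.linalg.norm(f - laplace_matvec(u, h))
    for k in range(1, maxit + 1):
        u[1:-1, 1:-1] = 0.25 * (u[1:-1, 2:] + u[1:-1, :-2]
                                + u[2:, 1:-1] + u[:-2, 1:-1]
                                + h * h * f[1:-1, 1:-1])
        if np.linalg.norm(f - laplace_matvec(u, h)) < tol * r0:
            return k
    return maxit

if __name__ == "__main__":
    n, h = 64, 1.0 / 65
    f = np.zeros((n + 2, n + 2))
    f[1:-1, 1:-1] = 1.0        # constant forcing, zero boundary
    print("Jacobi iterations:", jacobi_iters(f, h))
    print("CG iterations:    ", cg_iters(f, h))
```

The gap widens as the grid is refined: Jacobi's iteration count grows roughly with the number of grid points per dimension squared, CG's only linearly, and multigrid is (ideally) grid-independent.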
My reading of [2] is that the model only needs to be retrained if the environmental boundary changes: for example, to handle objects that do not fit within the current boundary, since I imagine scale matters here and you couldn't simply shrink everything inside. Internal geometry changes, such as replacing the castle with a car, do not appear to require retraining.
TBH, I doubt it. There's lots of nifty tricks that can speed up CFD for visualization (like this) by many orders of magnitude. Yet no engineers are willing to touch any of those tricks, because we are making physical machines that can kill people if something like the pressure spikes are off by 10%.
Can anyone upload pretrained models for this stuff? I want to mess with it, but don't want to waste a week of GPU time if someone else is already on the case...
"Computational Fluid Dynamics" by Ferziger and Peric. I'm in an area related to hydrodynamics, and while I don't do CFD myself, this book was easy to read and covers a lot of the basics, I think.
And it's extremely well written.
(Ferziger also wrote a great book on kinetic theory of rarefied gases)
[1] https://arxiv.org/pdf/1607.03597v3.pdf