Singed Silhouettes and Feed Forward Flames: Volumetric Neural Style Transfer for Expressive Fire Simulation

Paul Kanyuk, Vinicius C. Azevedo, Raphael Ortiz, Jingwei Tang

Abstract:

While controlling simulated gaseous volumes remains an ongoing battle when seeking realism in computer graphics, creating appealing characters entirely out of these simulations raised the challenge to a new level in Pixar's film Elemental. Fire characters like the protagonist Ember needed faces and bodies that looked and moved like real fire, yet were not so frenetic as to distract from the acting and emotion of their performances. Neural style transfer (NST) emerged as a key technique for achieving a look that met these criteria. Using smoother, more languid pyro simulations as input, higher-frequency cusp and curve shapes could be coherently transferred to the volumes via a GPU-based optimization that applies recent advances in NST to the voxels themselves. These transferred shapes were controlled by hand-painted styles and parameters that could be animated to enhance the character performance. A key benefit of this process was that the final shape of the fire was decoupled from the underlying simulation: the simulation could focus on lower-frequency motion and stability, while NST provided the final touches of shaping, particularly around the silhouette. Users could modify the perceived speed of the NST patterns by modulating the velocity input, and control where the effect was strongest by masking the resulting vector field. For large, reusable simulations intended to be seen from many views, NST proved impractical because the multiple stylization viewpoints added excessive computational cost. To solve this, we trained a convolutional neural network to approximate the stylization optimization on representative volumes using multiple viewpoints. The much cheaper feed-forward network could then be applied broadly, allowing us to bring NST to bear on large-scale environment simulations as well as characters.
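The abstract describes masking the NST-produced vector field and modulating its velocity to control where and how fast the stylization appears. The sketch below is a minimal illustration of that idea, not the paper's implementation: it assumes the stylization is expressed as a per-voxel displacement field (here on a 2D grid for brevity), which is scaled by an artist-painted mask and a speed parameter before advecting the density. The function name, array shapes, and nearest-neighbor semi-Lagrangian sampling are all assumptions for illustration.

```python
import numpy as np

def apply_masked_stylization(density, style_velocity, mask, speed_scale=1.0):
    """Advect a density field by a masked, speed-scaled stylization field.

    Illustrative 2D stand-in for the 3D voxel volumes in the paper.
    density:        (H, W) scalar field
    style_velocity: (H, W, 2) per-cell displacement from the NST pass
    mask:           (H, W) artist-painted weight in [0, 1]
    speed_scale:    scalar controlling perceived speed of the patterns
    """
    h, w = density.shape
    # Mask controls where the effect is strongest; speed_scale controls
    # how quickly the transferred patterns appear to move.
    v = style_velocity * mask[..., None] * speed_scale
    # Semi-Lagrangian backtrace with nearest-neighbor sampling.
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    src_y = np.clip(np.rint(ys - v[..., 0]).astype(int), 0, h - 1)
    src_x = np.clip(np.rint(xs - v[..., 1]).astype(int), 0, w - 1)
    return density[src_y, src_x]
```

With the mask set to zero (or the speed scale set to zero) the density passes through unchanged, matching the abstract's point that artists can locally suppress the stylization while the underlying simulation stays untouched.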

Paper (PDF)

SIGGRAPH Talks 2023