Workflow Case Studies

The production workflows are discussed below; first, as background, we describe how USD is integrated with our pipeline applications.

A single USD rendering service is used in all applications and contexts, both for interactive preview and for RIB generation. The interactive portion of the rendering library is implemented with OpenGL, leveraging modern GPU hardware when available, including hardware shaders and instancing. The primary goal of this rendering service is efficiency, as it is used to draw the lightest-weight representation of an asset. For this reason, it must also identify model instances so that vertex buffers can be shared when possible. References provide an ideal mechanism for identifying model instances: whenever a reference carries nothing other than a rigid transformation, it is marked as a candidate for instancing.
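
As an illustration of how such a pass might look, the sketch below walks a stage with the modern, open-source USD Python API and flags referenced prims that carry only a rigid transform as instancing candidates. The file name, the use of SetInstanceable, and the particular xformOp check are assumptions for illustration, not the rendering service's actual implementation.

    # A minimal sketch, assuming a hypothetical "set.usda" and that marking a
    # prim instanceable is an acceptable stand-in for the renderer's own
    # bookkeeping of instancing candidates.
    from pxr import Usd, UsdGeom

    stage = Usd.Stage.Open("set.usda")

    RIGID_OPS = (UsdGeom.XformOp.TypeTranslate,
                 UsdGeom.XformOp.TypeRotateXYZ,
                 UsdGeom.XformOp.TypeOrient)

    for prim in stage.Traverse():
        if not prim.HasAuthoredReferences():
            continue
        xformable = UsdGeom.Xformable(prim)
        ops = xformable.GetOrderedXformOps() if xformable else []
        # A reference carrying nothing other than a rigid transformation can
        # share vertex buffers with its siblings.
        if all(op.GetOpType() in RIGID_OPS for op in ops):
            prim.SetInstanceable(True)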

In other cases, the asset may be loaded using the native application API, in which case the host application is responsible for drawing. Alternatively, a hybrid integration can be deployed: in Katana, for example, the high-performance imaging library is used until the user requests to see the individual components of an object (such as mesh faces), at which point responsibility for rendering is transferred back to Katana.

In the workflows below, USD is named as the backing technology; however, as of October 2013, these workflows are largely implemented using Pixar's predecessor to USD, TidScene.

Animation

A pose cache is generated nightly for every shot in a production and stored in a well-known location available to all machines on the network. In Presto, animators can switch the representation of their animated characters to a demoted stand-in that references this nightly pose cache, letting them see all animation submitted the previous day. These shots also load and render efficiently because rendering is deferred to the USD rendering library; only the model root is exposed in Presto, along with a gprim pointing at the model in the pose cache.
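
A demoted stand-in of this kind could be expressed as a prim that references the character's subtree in the pose cache, as in the minimal sketch below using the modern USD Python API. The pose-cache path, character name, and prim paths are hypothetical placeholders.

    # Minimal sketch: build a stand-in that references a character out of the
    # nightly pose cache. All paths and names here are hypothetical.
    from pxr import Usd

    POSE_CACHE = "/shots/r100_1/pose_cache.usd"  # well-known nightly location

    stage = Usd.Stage.CreateNew("standin.usda")
    standin = stage.DefinePrim("/World/Buzz", "Xform")

    # Drawing is deferred to the USD rendering library rather than to the
    # fully rigged character.
    standin.GetReferences().AddReference(POSE_CACHE, "/World/chars/Buzz")
    stage.GetRootLayer().Save()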

Articulation/Rigging

"Menva", one of the progenitors for USD, is the scene description for Presto rigging; it is a superset of the features encodable in USD. Because different rigging and animation packages differ in their approaches and data-models so substantially, we do not feel confident about trying to propose/define a standard rigging data-model at this time. Therefore, the USD data-model and schemas are currently restricted to geometry and shading.

However, rigging can always be layered on top of the "universal" geometry and shading. In our own deployment of USD, one references an asset into a collection by referencing its "top-level" USD file, which in turn references (through a payload) the geometry and shading USD files, and also contains an attribute that identifies the Menva file containing the asset's Presto rigging. When Presto encounters these USD references, it triggers a plugin that imports the USD with procedural logic that finds the rigging layer and delivers it to Presto to interpret and compose onto the geometry and shading. The procedural logic in the plugin also allows Presto to consume the asset in multiple ways: cached (a single native Presto prim represents the entire asset, with proxy preview drawing), collapsed (multiple proxy prims are created for rigidly transformable sub-sections of the model), or fully riggable geometric objects. The most basic use case is sets and props, which have only rigid transformations; in these cases, geometry can be exported from the modeling package and referenced directly into Presto for preview.
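
The asset structure described above might be assembled roughly as in the following sketch, using the modern USD Python API: a top-level file that payloads the geometry and shading layers and records the Menva rig file in an attribute. The asset name, layer paths, and attribute name are illustrative assumptions, not our actual asset conventions.

    # Minimal sketch of a "top-level" asset file; names and paths are assumed.
    from pxr import Usd, Sdf

    stage = Usd.Stage.CreateNew("Chair.usd")
    asset = stage.DefinePrim("/Chair", "Xform")
    stage.SetDefaultPrim(asset)

    # Geometry and shading come in through a payload so they can be deferred.
    asset.GetPayloads().AddPayload("./Chair_geom_shading.usd")

    # Record where Presto can find the asset's rigging; the plugin described
    # above reads this attribute and composes the rig onto the geometry.
    rig_attr = asset.CreateAttribute("rigFile", Sdf.ValueTypeNames.Asset)
    rig_attr.Set(Sdf.AssetPath("./Chair.menva"))

    stage.GetRootLayer().Save()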

Set Dressing

Set dressing is performed in Maya or Presto, where object instances are created, positioned, parented, and renamed. The USD representation lets the user see a very light-weight version of each object with its geometry collapsed, which dramatically reduces the load time of shots and is a significant improvement for set dressing.
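
A set-dressing result of this kind might be captured in USD as referenced, instanceable prims with per-instance rigid transforms, as in the sketch below. The asset, set name, and placement values are hypothetical.

    # Minimal sketch: dress a set by referencing the same asset several times
    # and giving each instance its own rigid transform. Names are hypothetical.
    from pxr import Usd, UsdGeom, Gf

    stage = Usd.Stage.CreateNew("diner_set.usda")

    for i, position in enumerate([(0, 0, 0), (120, 0, 0), (240, 0, 0)]):
        chair = stage.DefinePrim("/Diner/Chair_%d" % i, "Xform")
        chair.GetReferences().AddReference("Chair.usd")
        chair.SetInstanceable(True)  # keep the representation light-weight
        UsdGeom.XformCommonAPI(chair).SetTranslate(Gf.Vec3d(*position))

    stage.GetRootLayer().Save()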

Additionally, the camera baked into the pose cache can be used to transfer cameras between Presto and Maya. Because cameras typically have non-trivial rigging, only a baked representation of the camera is exchanged between applications.
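
Reading the baked camera back out of the pose cache is straightforward with the Python bindings; the sketch below assumes a hypothetical pose-cache path, camera path, and frame.

    # Minimal sketch: pull the baked shot camera from the nightly pose cache
    # at a given frame. Paths and the frame number are hypothetical.
    from pxr import Usd, UsdGeom

    stage = Usd.Stage.Open("/shots/r100_1/pose_cache.usd")
    cam = UsdGeom.Camera.Get(stage, "/World/cameras/shotCam")

    # GetCamera() returns a Gf.Camera carrying the baked world transform and
    # frustum parameters, ready to hand to Maya or Presto.
    gf_cam = cam.GetCamera(Usd.TimeCode(101))
    print(gf_cam.transform, gf_cam.focalLength)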

Simulation

The majority of simulation is done offline and integrated into the shot via Presto; however, some simulation can either be performed interactively from a procedural prim in the USD scene, or pre-computed offline, baked to a USD representation, and layered over the model's rest pose. Trees and vegetation make use of both cases: wind and other effects can be computed offline and baked into simulation clips applied to the curve hierarchy of the vegetation, while simple keep-alive motion can be created interactively and layered via the procedural prim.
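
The offline case, in which a baked simulation is layered over the rest pose, might be expressed as time-sampled overrides in a layer that sublayers the rest-pose file, as in the sketch below. The file names, prim path, and shape of the simulator output are assumptions.

    # Minimal sketch: bake offline wind simulation over a tree's rest pose by
    # authoring time-sampled point overrides in a separate layer.
    from pxr import Usd, UsdGeom

    simulated_points = {}  # hypothetical: frame -> point positions from the simulator

    sim_stage = Usd.Stage.CreateNew("tree_wind_sim.usda")
    sim_stage.GetRootLayer().subLayerPaths.append("tree_rest_pose.usd")

    # Override the vegetation's curve points with the baked samples.
    branches = UsdGeom.BasisCurves(sim_stage.OverridePrim("/Tree/branches"))
    points_attr = branches.CreatePointsAttr()
    for frame, points in simulated_points.items():
        points_attr.Set(points, Usd.TimeCode(frame))

    sim_stage.GetRootLayer().Save()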

Additionally, the communication protocol between Presto and the simulator can be expressed using USD. The geometry and simulation parameters are baked into a USD file that the simulator consumes, thereby decoupling the simulator from the animation, modeling, or lighting package.
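
Such a handoff might look like the sketch below: geometry referenced into a file alongside custom attributes carrying the simulation parameters. The attribute names and values are illustrative assumptions, not an established schema.

    # Minimal sketch: hand geometry plus simulation parameters to an external
    # simulator through a single USD file. The "sim:" attributes are invented
    # for illustration.
    from pxr import Usd, Sdf

    stage = Usd.Stage.CreateNew("sim_input.usda")
    cloth = stage.DefinePrim("/SimInput/shirt", "Mesh")
    cloth.GetReferences().AddReference("shirt_geom.usd", "/shirt")

    # Parameters ride along as attributes on the same prim, so the simulator
    # never needs to talk to the animation, modeling, or lighting package.
    cloth.CreateAttribute("sim:stiffness", Sdf.ValueTypeNames.Float).Set(0.8)
    cloth.CreateAttribute("sim:substeps", Sdf.ValueTypeNames.Int).Set(4)

    stage.GetRootLayer().Save()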

Maya-Houdini Linkage

The specific workflow involves Maya and Houdini, but more generally it is sometimes desirable to use two applications interactively, with one acting as the master and the other processing geometric data. In this case, a model is being created in Maya, but the terrain is a procedural function that is easily expressed in Houdini. The procedural generation is driven by Maya primitives that are sent to Houdini via USD. This system provides a loose coupling of the two packages along with interactive preview, and does not require a model build.

This workflow is currently in development for use in a future film.

Crowds

Animation for crowds is created on a character-by-character basis, from which animation clips are baked to USD. These clips are then sequenced using finite state machines or external tools (e.g., Massive). The playback sequence makes use of the time scale and offset parameters of references, which allow the sequencer to select specific frame ranges from the original animation clips. Furthermore, the referenced file, time offset, and time scale can themselves be animated using the animated value reference feature. As a result, the animation data in the final shot is considerably smaller than it would be if the animation were baked out for every frame.
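
In the modern USD Python API, the closest analogue to the time scale and offset on references is the layer offset carried by a reference, as in the sketch below; the animated value reference feature has no direct equivalent shown here. Clip names, prim paths, and frame numbers are hypothetical.

    # Minimal sketch: sequence a crowd agent from a baked animation clip using
    # the time offset and scale carried on a reference.
    from pxr import Usd, Sdf

    stage = Usd.Stage.CreateNew("crowd_shot.usda")
    agent = stage.DefinePrim("/Crowd/agent_042", "Xform")

    # Shift the clip so its frame 0 lands on shot frame 100, and stretch it
    # to play at half speed (offset=100, scale=2.0).
    agent.GetReferences().AddReference(
        Sdf.Reference("clips/walk_cycle.usd", "/Agent",
                      Sdf.LayerOffset(100, 2.0)))

    stage.GetRootLayer().Save()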

This was a key strategy for achieving the scale of crowds on Brave and is also being further developed by MU.

Camera Pruning

A shot USD file can be processed to provide camera-pruning information, both for set dressing and for compositing purposes. Pruning for set dressing is done early and promotes efficiency of the set design. The Activation feature of USD is used to disable large portions of the set that are out of camera. This type of pruning can be done in two ways: 1) an extremely fast pre-pass in which the USD file is used in isolation to perform frustum intersections, or 2) using the USD file in the context of lighting, which is slower but often necessary to find off-camera geometry that casts shadows or appears in reflections in the final frame.
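
The fast pre-pass might look like the sketch below: bounds for each component of the set are tested against the shot camera's frustum, and anything fully outside is deactivated. The shot path, set scope, camera path, and frame are hypothetical, and in practice the resulting deactivations would be authored to a separate pruning layer rather than back into the shot file.

    # Minimal sketch of the fast pre-pass, using the USD file in isolation.
    from pxr import Usd, UsdGeom

    stage = Usd.Stage.Open("/shots/r100_1/shot.usd")
    frame = Usd.TimeCode(101)

    camera = UsdGeom.Camera.Get(stage, "/World/cameras/shotCam")
    frustum = camera.GetCamera(frame).frustum

    bbox_cache = UsdGeom.BBoxCache(frame, [UsdGeom.Tokens.default_])
    for prim in stage.GetPrimAtPath("/World/sets").GetChildren():
        if not frustum.Intersects(bbox_cache.ComputeWorldBound(prim)):
            prim.SetActive(False)  # the Activation feature prunes this scope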

FX

To apply effects to an established shot, the nightly shot USD file (the pose cache) is referenced into Houdini. The unwanted scopes are pruned away, leaving only the geometry of interest. The geometry is edited, new geometry is created, shader primvars are manipulated, and finally a new USD file is produced. The new file contains only override values and additions relative to the original shot file; the original shot USD file is referenced in at the root to provide the fallback values.
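
The resulting FX file might be structured as in the sketch below: the shot file is referenced at the root, and only sparse overrides (deactivations, primvar edits) are authored locally. The paths, character, and primvar name are hypothetical.

    # Minimal sketch: an FX layer of sparse overrides over the shot pose cache.
    from pxr import Usd, UsdGeom, Sdf

    stage = Usd.Stage.CreateNew("fx_overrides.usda")
    root = stage.DefinePrim("/World", "Xform")
    stage.SetDefaultPrim(root)

    # The original shot file provides the fallback values.
    root.GetReferences().AddReference("/shots/r100_1/pose_cache.usd", "/World")

    # Prune away scopes that are not of interest to the effect.
    stage.OverridePrim("/World/sets/diner").SetActive(False)

    # Manipulate a shader primvar on the geometry being affected.
    body = stage.OverridePrim("/World/chars/Buzz/body")
    pv = UsdGeom.PrimvarsAPI(body).CreatePrimvar("scorchAmount",
                                                 Sdf.ValueTypeNames.Float)
    pv.Set(0.75)

    stage.GetRootLayer().Save()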

Additionally, FX will generate new USD files that do not reference the original shot. These are purely new geometry, often the result of simulation or procedural processes.

Feature Prototyping & Experimentation

When a technical director or researcher needs to prototype a new pipeline feature, there is often an existing USD file that provides a starting point for the prototype. For example, a new fracture simulator is currently being tested, and the geometry for that experiment is provided by existing production USD files. For "The Good Dinosaur", procedural Hierarchical Subdivision edits were prototyped using USD assets from character models. Another example is an ongoing crowds technology experiment that uses existing production USD files to bake spherical harmonics and other signals into point clouds and brickmaps.

The ubiquity of a common geometry and referencing format, along with simple Python bindings, provides easy inter-application communication of production scene description, whereby new tools and workflows can be readily developed.

In particular, it is not solely the interchange format that enables these interactions, but also the referencing mechanism. References reduce the need for expensive re-baking steps, thereby allowing layering to happen in any application or script.