Introducing Sherman (Part 2) - a Unity project featuring Real time fur, HDRP and Visual FX Graph for animators

June 11, 2019 in Games | 20 min. read

Created by the Emmy-winning team that brought you Baymax Dreams, Sherman is a new real-time Unity short that delivers the most advanced real-time fur ever!

My name is Mike Wuetherick, and I am the Head of Tech for Unity’s Media and Entertainment Innovation Group. Shortly after joining Unity 3 years ago, I helped found an Innovation team dedicated to pushing Unity’s capabilities for CG Animation and Film. Since then, our team has had the pleasure of collaborating with Neill Blomkamp’s Oats Studios (Adam Episode 2 & 3), with Neth Nom on Sonder, and most recently the Baymax Dreams shorts with Disney Television Animation.

This is the second part of our blog series about Sherman. Make sure to check out Part 1, where we talk about the creation, animation blocking, lookdev and camera layout for the short.

Table of Contents, Part 2

7. Advanced animation with Alembic
8. Lighting strategies for Linear Content
9. Fur & VFX
10. Filmic Motion Blur / Super Sampling
11. Unity Recorder

Advanced animation with Alembic

For the Baymax Dreams shorts, the team used the FBX file format as the primary format for transferring assets from Maya into Unity. Transferring data between DCC packages is always a challenge - exporting content from a source application (say, Maya) to an interchange format is inherently lossy - but each of the available formats has its own advantages and disadvantages.

FBX is the typical format used by most Unity projects, with a well-defined, optimized workflow. Traditional bone-based animation with FBX is how the vast majority of Unity projects handle animation; however, the technique has limitations:

  1. Bone skin weighting limitation (prior to Unity 2019.1)
  2. Scale Compensation (for squash & stretch)

Skin Weighting

In Unity 2018.3 (and prior), Unity can weight each vertex with up to 4 bones, which makes advanced skin weighting much more complicated and problematic. This limitation was removed in Unity 2019.1 (which supports up to 256 bones per vertex); however, 2019.1 was still in early alpha during development of the short, so we made the decision to stick with 2018.3 for the Sherman production.
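In 2019.1 and later, the higher cap can be enabled project-wide. A minimal sketch, assuming the QualitySettings API that shipped with 2019.1:

```csharp
using UnityEngine;

public static class SkinWeightSetup
{
    // Raise the per-vertex bone influence cap project-wide.
    // SkinWeights.Unlimited removes the classic 4-bone limit (2019.1+).
    public static void EnableHighQualitySkinning()
    {
        QualitySettings.skinWeights = SkinWeights.Unlimited;
    }
}
```

The same setting is available in the editor under Project Settings > Quality, which is where most projects would change it.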

Scale Compensation (Squash & Stretch)

One of the first things you learn as an animator is Squash & Stretch (it is considered ‘by far the most important’ of the 12 principles of animation according to Wikipedia). The video below demonstrates Squash & Stretch in an early animation test for the Raccoon in the Sherman short. This video was created to test how the early fur implementation would work with the character’s animation & rig.

In order to achieve true squash & stretch animation, a technique called ‘Scale Compensation’ is used. During the Baymax Dreams project we achieved this with some rigging tricks (by flattening the rig hierarchy), but for Sherman, we wanted to try another approach - Alembic!

Solution: Alembic

Alembic (www.alembic.io) is a file format commonly used in VFX & animation productions. It gained popularity as an interchange format that lets studios move content between the different software packages in a pipeline.  Unlike FBX, which uses skinned meshes and bones to translate & deform meshes, Alembic assets are baked vertex representations of the source model, giving you ‘what you see is what you get’ copies of the source asset in any package that supports Alembic files.

The first implementation of Alembic for Unity traces back to the collaboration between Marza Animation Planet and Unity Japan on their short ‘The Gift’. Alembic was used to drive the giant ‘ball pit’ wave animation near the end of the short.  Additional features were added while we were working with Oats Studios on the Adam Episode 2 & 3 shorts.

Alembic Package out of Preview

In Unity 2019.1, the Alembic package for Unity is out of preview - meaning that it’s a fully supported format going forward. This is great news for anyone working on CG animation - being able to work with a format that you already know and use in your existing pipelines is a great benefit. Check the package documentation for more information about the features that Alembic for Unity supports and how to get started with it in your own projects.

Alembic Challenges

Alembic is a fantastic format for many reasons - its WYSIWYG ability to transfer content from one package to another makes it ideal for baking animation out of Maya, for example, without worrying about the special rigging or animation constraints imposed by FBX or other formats.

However, Alembic adds a number of challenges as well:

  1. File Size
  2. Attaching objects
  3. Material management

File Size

Each frame of animation in an Alembic file is a baked snapshot of the vertex positions of the model at that frame. This adds up quickly: at 12 bytes per vertex position (three 32-bit floats), a 100,000-vertex character stores roughly 1.2 MB per frame, or about 36 MB per second at 30 fps, before any additional attributes. The higher-poly the mesh or the longer the animation, the larger the resulting Alembic file.

For Sherman, we ended up with a total of 7 ‘beats’ or sequences for the short. For each sequence, we output a corresponding Alembic file for each of the ‘characters’ in the scene (including the sprinkler, hose, gnome, etc). The resulting animation files in Alembic format total almost 7 GB of animation data - significantly larger than the same length of animation would be in FBX, for example. The overhead of streaming this amount of data at real-time framerates is not insignificant, so it needs to be taken into account.

For example, we used Alembic to stream elements of the characters in the Adam Episode 2 & 3 shorts (cloth, face animation); however, the real-time constraint (wanting to play the animation back at 30 fps) required that we manage the detail level of the Alembic content in the scene.

Attach points

Attaching elements like lights or Reflection Probes to animated objects in a scene is fairly common. With skeletal animation this is trivial: simply embed the attached object as a child in the scene hierarchy.  Cinemachine also depends on these local reference transforms to control where cameras are focused and targeted.

When exporting to Alembic, however, the skeleton of the character isn’t included by default (animations are exported as ‘Renderable only’), so we needed to figure out a different approach to attach items in the proper position while the Alembic animation plays. Luckily, Maya’s Alembic exporter provides a few other options that we can use to get more than just the renderable mesh data out.

Maya Alembic export options.

By simply unchecking ‘Renderable only’, Maya also exports the full IK rig & skeleton/bone positions for the characters in addition to the rendered meshes, which allows us to use these positions to attach lights, reflection probes, and VFX to the appropriate nodes.
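With those extra transforms in the imported hierarchy, attaching an object is ordinary Unity parenting. A minimal sketch of a hypothetical helper (the node name is illustrative, and this is not a tool from the production):

```csharp
using UnityEngine;

// Hypothetical helper: keeps an attached object (a light, reflection probe,
// or VFX) glued to a named skeleton node inside an imported Alembic hierarchy.
public class FollowAlembicNode : MonoBehaviour
{
    [Tooltip("Root of the imported Alembic prefab instance.")]
    public Transform alembicRoot;

    [Tooltip("Name of the exported skeleton/bone node to follow.")]
    public string nodeName = "head_jnt"; // illustrative name

    Transform target;

    void LateUpdate()
    {
        // Find the node once; Alembic streaming updates its transform each frame.
        if (target == null && alembicRoot != null)
            target = FindDeep(alembicRoot, nodeName);

        if (target != null)
            transform.SetPositionAndRotation(target.position, target.rotation);
    }

    static Transform FindDeep(Transform root, string name)
    {
        if (root.name == name) return root;
        foreach (Transform child in root)
        {
            var found = FindDeep(child, name);
            if (found != null) return found;
        }
        return null;
    }
}
```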

Material Management

Another downside to Alembic is that the format does not store any material definitions. Where FBX and most other formats include a definition of which textures are applied to which material, with Alembic this information does not come across from the DCC (Maya, for example) into Unity. This meant that every time an animation was re-exported to Alembic, we would need to remap the materials for that animation - keeping dozens of Alembic clips synced with their appropriate materials throughout the production was time consuming.
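The core of the fix is conceptually simple: persist a name-to-material mapping and reapply it after each re-import. A minimal sketch of that idea (illustrative only, not the actual tool from the production):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Illustrative sketch: reapply materials to a re-imported Alembic instance
// by matching renderer names against a saved name-to-material mapping.
public static class AlembicMaterialRemapper
{
    public static void Remap(GameObject alembicRoot,
                             Dictionary<string, Material> materialByName)
    {
        foreach (var renderer in alembicRoot.GetComponentsInChildren<MeshRenderer>(true))
        {
            if (materialByName.TryGetValue(renderer.gameObject.name, out var mat))
                renderer.sharedMaterial = mat;
        }
    }
}
```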

We wrote several tools during Sherman to help simplify material mapping for Alembic files. Two of them are available in the new Film/TV Toolbox package that is available on Github, or as a package on Unity’s Package repository.

The Film / TV Toolbox package in Unity’s Package Manager.

USD & the Future

There have been some very promising developments in this area. Pixar’s open-source USD format has been gaining traction over the past few years, and recently, Unity released the first version of our support for USD via the Package Manager.

The team is very excited about the possibilities that USD offers, and we’ll definitely be keeping an eye on this format for future productions.

Lighting strategies for Linear Content

Lighting in real time is very different from lighting with a traditional offline renderer. While the basics are the same (you place lights in a scene, etc.), the technical approaches vary quite significantly.

Real-time lighting in a short like Sherman is achieved through a combination of techniques, including light probes, reflection probes (both planar and cubic), cascade shadow maps and baked global illumination.

When approaching the lighting for a project like Sherman, we look to our lighting supervisor, Jean-Philippe (JP) Leroux.

First, to achieve real-time performance, certain aspects of the lighting solution need to be pre-calculated. To do so, sets require a bit of upfront preparation.

Global illumination

Indirect lighting needs to be precalculated to achieve high-quality localized ambient lighting. All large static objects are marked to contribute to the solution and are lightmapped. We did this using the Progressive Lightmapper in Baked Indirect mode.

All small and dynamic objects are lit by a probe array. Larger objects not suitable for lightmapping benefit from more refined probe lighting through the use of proxy volumes.
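A minimal sketch of that proxy-volume setup, using Unity’s standard LightProbeProxyVolume component (the resolution value here is illustrative):

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Sketch: give a large object finer-grained probe lighting by sampling the
// light probe grid across a volume instead of at a single point.
public static class ProbeLightingSetup
{
    public static void UseProxyVolume(Renderer renderer, int resolution = 4)
    {
        var lppv = renderer.gameObject.AddComponent<LightProbeProxyVolume>();
        lppv.resolutionMode = LightProbeProxyVolume.ResolutionMode.Custom;
        lppv.gridResolutionX = resolution;
        lppv.gridResolutionY = resolution;
        lppv.gridResolutionZ = resolution;

        renderer.lightProbeUsage = LightProbeUsage.UseProxyVolume;
        renderer.lightProbeProxyVolumeOverride = lppv.gameObject;
    }
}
```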

To be clear, we didn’t bake the lights, only the contribution of the sky and the bounce of our directional light.

Reflections

You also need to set up good coverage of your set for baked localized reflections, using Reflection Probes.

Place your capture points mostly at camera level. Since our sky, with its sun close to the horizon, has very strong directionality, we had to take an extra step to correctly cover the shadowed areas.

Some highly reflective objects greatly benefit from real-time reflections. In our case, these were the metallic food bowl and the very shiny bubbled hose pipe: the first used a spherical real-time reflection probe, the second a planar reflection.
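HDRP has its own probe components with additional settings, but the idea can be sketched with Unity’s built-in ReflectionProbe API (values illustrative):

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Sketch: a real-time spherical reflection probe for a highly reflective
// hero object, re-rendered every frame so animated content shows up in it.
public static class HeroReflectionSetup
{
    public static ReflectionProbe AddRealtimeProbe(GameObject heroObject)
    {
        var probe = heroObject.AddComponent<ReflectionProbe>();
        probe.mode = ReflectionProbeMode.Realtime;
        probe.refreshMode = ReflectionProbeRefreshMode.EveryFrame;
        probe.size = Vector3.one * 2f; // cover the object and its immediate surroundings
        return probe;
    }
}
```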

Lights

From this point, all lights are purely dynamic; their properties can be authored and they can be moved around with instant feedback. One workflow trick we use is a pivot object for light placement. Your pivot acts as your target: once positioned, it allows you to easily orbit around your subject by switching between local and global orientation using the “x” key.

It is also worth noting that all lights cast shadows.

Working with prefabs has many advantages:

  1. It allows you to work on a single object without locking the scene, letting others work in it at the same time
  2. It allows you to rapidly propagate changes through your sequence/project
  3. It allows you to quickly revert overridden values

For Sherman, we created a master prefab per beat and nested prefabs for:

  • The sun
  • Fill sun
  • Fill sky
  • Rim sun
  • Rim Sky
  • Catchlight

Our master prefabs are structured per shot, and we use Activation Tracks in Timeline to trigger them.

Inside them, you will find things like shot-specific lights, Volumes that override certain global lighting properties, Density Volumes that drive atmospheric lighting, and shadow objects.
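Wiring a shot’s master prefab to an Activation Track is normally done by hand in the Timeline window, but as a sketch it can also be scripted (names and times are illustrative):

```csharp
using UnityEngine;
using UnityEngine.Playables;
using UnityEngine.Timeline;

// Sketch: add an Activation Track that switches a shot's lighting prefab
// on for the duration of that shot.
public static class ShotLightingTracks
{
    public static void AddShotActivation(TimelineAsset timeline,
                                         PlayableDirector director,
                                         GameObject shotLightingInstance,
                                         double start, double duration)
    {
        var track = timeline.CreateTrack<ActivationTrack>(null, "Shot Lighting");
        var clip = track.CreateDefaultClip();
        clip.start = start;
        clip.duration = duration;

        // Bind the track to the scene instance of the shot's lighting prefab.
        director.SetGenericBinding(track, shotLightingInstance);
    }
}
```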

Other properties, like post-processing, are also tweaked via the Timeline, using Cinemachine Post Processing clips: things like grading, camera effects, and optimizations such as camera culling.

Custom Post Profiles for each shot, applied with the CinemachinePostProcessing component to individual Virtual Cameras.

Fur & VFX

With the Animatic in hand, we quickly identified several key elements that would be critical to the success of the short:

  1. Water VFX
  2. Fur for the Raccoon

As the sprinkler plays a fairly ‘hero’ role in the short, one of the early challenges was figuring out how we wanted to tackle the fluid simulation. There are a number of traditional methods of handling fluid simulation for animation, including Houdini, or the fluid systems available natively in Maya. Steven made a few early tests using Maya for the fluid effects, but as he wasn’t super familiar with the Maya fluid system or Houdini, we were concerned that the amount of work involved might not achieve the results we were after.

Around the same time, Unity released the Visual Effect Graph, a new node-based, GPU-accelerated effects system. While it looked promising, we had no experience with it, or any sense of how we might achieve the results we were looking for. Luckily for us, Vlad Neykov (Lead Graphics Test Engineer based in Brighton) was able to jump in and achieve some amazing results.

Dynamic Effects with the Visual Effect Graph

With 2018.3, the new Visual Effect Graph was released in preview for HDRP.  For the Baymax Dreams shorts, the team had used the legacy particle system combined with real-time physics (for destruction). While none of our team had used the Visual Effect Graph before, we knew that achieving the advanced water simulation and visuals we wanted would require the advanced particle features and HDRP integration that it provides.

The team asked around within Unity for anyone with experience using the Visual Effect Graph, and were lucky to be able to borrow the talents of Vlad Neykov to tackle the gorgeous water & other effects seen in the short.

All of the VFX were managed in their own Timeline sequence, which allowed Vlad to iterate on the timing and animations separately, using a combination of the custom ‘Visual Effect Activation Track’ that ships with the Visual Effect Graph and traditional animation tracks to animate properties of the graph.
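Exposed properties on a graph can also be driven directly from script, which pairs well with Timeline animation. A minimal sketch, assuming an exposed float named "SpawnRate" (an illustrative name, not one from the short):

```csharp
using UnityEngine;
using UnityEngine.VFX;

// Sketch: drive an exposed Visual Effect Graph property from script.
public class WaterPressure : MonoBehaviour
{
    public VisualEffect waterEffect;

    [Range(0f, 1f)] public float pressure = 1f;

    // "SpawnRate" is an illustrative exposed-parameter name.
    static readonly int SpawnRateID = Shader.PropertyToID("SpawnRate");

    void Update()
    {
        if (waterEffect != null && waterEffect.HasFloat(SpawnRateID))
            waterEffect.SetFloat(SpawnRateID, pressure * 5000f);
    }
}
```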

For the sprinkler effect, Vlad started with the idea of one water effect to rule them all, but slowly kept expanding it to cover new cases: in the first shot, the water comes from the camera side, so he had to add control for it; in another shot, the water bounces off the food bowl, so he had to add collision for it but hide it in the other shots; in yet another, depth collision didn’t collide close enough and was replaced with sphere collision; and so on. In the end, almost every shot in the production has a separate water effect to handle its specific situations.

Vlad initially handled the water collision using screenspace depth tests, but after testing the various situations, he replaced them with simple plane/sphere collision representations for the various collision volumes.

Initially, our main concern was how to achieve the dynamic water effect for the sprinkler, but Vlad ended up tackling all of the FX for the short, including small details like the leaves falling from the hedge, dust clouds for footsteps, the dirt explosions and electricity zaps. The project was a great real-world test case for the VFX Graph and generated a ton of feedback for the team as they continue to develop the VFX Graph.

Overall, we are very excited about the potential that the VFX Graph brings to Unity, and are grateful to Vlad for going above and beyond to support the project.

Tackling Real-time Fur (and feathers)

We had a few technical goals that we wanted to solve while producing the short, but one of (if not the) biggest that we chose to tackle for this production is a topic that comes up frequently when talking to studios about using Real-time engines for animation - Fur.

Initially, we were hesitant to tackle Fur rendering. While we had full confidence in John’s ability as a graphics engineer, achieving good ‘offline quality’ Fur in Real time is a significant challenge.

When discussing Hair or Fur in any engine, there are 4 key elements that make up the solution:

  1. Geometry generation
  2. Shading
  3. Dynamics
  4. Authoring

For the fur on Sherman, we tackled the first 2 (geometry generation & shading), and did some experimentation with Dynamics (physics) but ultimately decided not to use this for the final short. The final key aspect is the actual Authoring of the fur - providing a way for artists to get hands-on with the fur is critical to its final appearance.

The first thing the team did was to evaluate the existing work on fur rendering. One of the first fur rendering implementations in Unity was created by Marza Animation Planet for their short ‘The Gift’.

Every year Unity holds a company-wide ‘Hack Week’, where most of the engineering team at Unity gets together to collaborate on new & interesting projects, trying out experimental ideas and otherwise ‘hacking’. Last year, one of the teams sought to continue the work that was done on The Gift and see what other possibilities there might be for real-time fur in Unity.  The team ported the Marza fur to HDRP (among other things), and provided the foundation that the Sherman team used to build our fur solution.

From the base implementation that the Hack Week project provided, John and Steven started work on the workflow for authoring the Fur.

I’m not going to get into too much technical detail about the specifics (I won’t do it justice), however, there are a few interesting elements to the approach that the team took that I’d like to cover.

Source Fur Mesh / SDF

One of the first things that Steven did for the fur was to actually model a patch of fur as individual hair strands. This was baked into an SDF (signed distance field), which was used as the source for the fur geometry volume itself. By having this high-resolution source for the fur, the fidelity is much higher than a pure hull-based approach; for example, the normals can be calculated per strand, resulting in much better lighting than most existing real-time fur implementations. As we progressed with the implementation, a second analytical SDF was incorporated for the fur ‘overcoat’, allowing us to blend the two.

Source Fur Mesh (used to generate the SDF).

The undercoat uses the baked SDF and the overcoat uses the analytical SDF, so there is an option of picking the SDF that suits your needs. Analytical SDFs give you unlimited resolution and, in the future, the ability to modify the strand profiles/properties directly inside of Unity. Baked SDFs allow you to bake more complex geometry (e.g. feather geometry or fur styles that are impractical or difficult to achieve analytically) and give you the predictability of replicating what you make in your DCC app of choice.
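To make ‘analytical’ concrete: an analytical SDF is just a distance function evaluated in code. A toy sketch (not the production shader) for a single strand modeled as a capsule:

```csharp
using UnityEngine;

// Toy sketch of an analytical SDF: signed distance from point p to a fur
// strand modeled as a capsule (a segment from root to tip with a radius).
// Negative values are inside the strand; the gradient of this field gives
// the per-strand normal used for shading.
public static class StrandSdf
{
    public static float Capsule(Vector3 p, Vector3 root, Vector3 tip, float radius)
    {
        Vector3 pa = p - root;
        Vector3 ba = tip - root;
        // Project p onto the segment, clamped to its endpoints.
        float h = Mathf.Clamp01(Vector3.Dot(pa, ba) / Vector3.Dot(ba, ba));
        return (pa - ba * h).magnitude - radius;
    }
}
```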

Groom Maps / Height Map

Fur by itself is great; however, animals like raccoons don’t have perfectly straight fur sticking out of their bodies at all points, and fur isn’t typically a uniform length across the entire body.

To achieve a good result, it was very important to be able to author groom maps that modify the fur normals and geometry. Steven used Mari to generate the groom maps, and also generated height maps to control the length of the fur across the different areas of the Raccoon’s body.
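The height-map part of that reduces to a single lookup: the strand length at a surface point is the authored maximum scaled by the map value at that point’s UV. A toy illustration (the real implementation does this per sample in the fur shader):

```csharp
using UnityEngine;

// Toy illustration: vary fur length across the body with a height map.
// Groom maps work similarly, bending strand direction away from the normal.
public static class FurGrooming
{
    // heightMap stores 0..1 (short belly fur vs. long tail fur, for example).
    // The texture must be imported as readable for GetPixelBilinear to work.
    public static float StrandLength(Texture2D heightMap, Vector2 uv, float maxLength)
    {
        return maxLength * heightMap.GetPixelBilinear(uv.x, uv.y).r;
    }
}
```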

Filmic Motion Blur / Super Sampling

The team created a large number of custom tools during the Sherman production. Most are simple in concept but provide significant workflow and time savings for the team. One of the critical pieces of tech that was developed is a system that we’ve dubbed ‘Filmic Motion Blur’.

During the Baymax Dreams project, one of the major technical challenges we needed to solve was motion blur. The current real-time state of the art for motion blur looks great at high frame rates when playing a game; however, for an offline render (like what we required for broadcast TV at 24 fps), it created artifacts that were unacceptable for broadcast.

In order to pass Disney’s quality standards, John Parsaie developed the Filmic Motion Blur system. At its core, Filmic Motion Blur is an accumulation-based renderer. Instead of rendering at 24 fps, the Timeline sequence is rendered at 960 fps and the intermediate buffers are accumulated into the actual frame that is finally written to disk. Since the effect simply combines the results of existing render buffers, it works with any materials or shaders without modification.
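Conceptually, the accumulation is just an average of subframes. A simplified sketch of the math (the real system hooks into the render pipeline and the Recorder, which this toy version ignores):

```csharp
using UnityEngine;

// Simplified sketch of accumulation-based ("filmic") motion blur: render
// evenly spaced subframes inside each 1/24 s frame interval and average
// them into the delivered frame. At 960 fps that is 960 / 24 = 40 subframes.
public static class AccumulationBlur
{
    public static Color[] AccumulateFrame(Color[][] subframes)
    {
        int pixelCount = subframes[0].Length;
        var result = new Color[pixelCount];

        foreach (var subframe in subframes)
            for (int i = 0; i < pixelCount; i++)
                result[i] += subframe[i];

        float inverseCount = 1f / subframes.Length;
        for (int i = 0; i < pixelCount; i++)
            result[i] *= inverseCount;

        return result;
    }
}
```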

For Sherman, this same technique is used to converge the fur samples into the final frames as well. In the future, we’re investigating ways to use this same technique to create high-end cinematic depth of field and other super-sampled effects.

This approach to rendering is not without tradeoffs, the biggest being that it does not run at real-time framerates. In fact, at 4K with full detail & fur, we’re talking in the ‘frames per minute’ range for outputting the final frames - but this is still orders of magnitude faster than existing offline CPU and GPU renderers that can take hours to render a final frame.

Render Window

Filmic Motion Blur boosts the final image quality significantly - it was a huge win. The biggest downside is that, in order to see the final result with all of the super sampling & convergence, the team had to render the sequence or shot they were working on.

In order to solve this problem, we created the Render Window, a custom editor window that anyone on the team can use to output a final quality render on demand. This allowed Steven to tune Groom maps and tweak the fur as needed while still being able to quickly see the final result.

Side by side of the alpha dithered fur and the final converged frame.

Recorder

Working on a project like Sherman is different from a typical Unity project: the end result is a sequence of frames, output from the editor, that make up the final animation. For this, we used Unity’s Recorder package to render the final images out of the editor. The Filmic Motion Blur feature described above hooks into the Recorder’s Timeline integration to converge the fur and create the super-sampled motion blur for the final frames.

Unity’s Recorder Track & Clip settings
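The production drove the Recorder through its Timeline track, as pictured above, but the package also exposes a scripting API. A hedged sketch of scripted batch output, assuming the Recorder 2.x editor API (verify the names against the package version you have installed):

```csharp
using UnityEngine;
using UnityEditor.Recorder;
using UnityEditor.Recorder.Input;

// Editor-only sketch: render a frame range to a 4K PNG sequence.
public static class BatchRender
{
    public static void RenderPngSequence(int startFrame, int endFrame)
    {
        var controllerSettings = ScriptableObject.CreateInstance<RecorderControllerSettings>();
        var controller = new RecorderController(controllerSettings);

        var image = ScriptableObject.CreateInstance<ImageRecorderSettings>();
        image.name = "4K PNG Sequence";
        image.Enabled = true;
        image.OutputFormat = ImageRecorderSettings.ImageRecorderOutputFormat.PNG;
        image.OutputFile = "Renders/shot_" + DefaultWildcard.Frame;
        image.imageInputSettings = new GameViewInputSettings
        {
            OutputWidth = 3840,
            OutputHeight = 2160
        };

        controllerSettings.AddRecorderSettings(image);
        controllerSettings.SetRecordModeToFrameInterval(startFrame, endFrame);
        controllerSettings.FrameRate = 24;

        controller.PrepareRecording();
        controller.StartRecording();
    }
}
```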

The final frames were rendered at 4K on 3 dedicated PCs that the team used as a mini render farm (so they didn’t need to tie up their main workstations). Every night we would render a new ‘daily’ of the day’s progress at final quality for the team to review.

Summary

This wraps up the second part of our blog series on Sherman. I hope that you enjoyed this deep dive into how we used Alembic for the character animations, the fur implementation we created for Sherman, and some of the additional tools the Innovation Group built to help teams working on linear animation with Unity! If you missed it, check out Part 1, which covers the creation, animation blocking, lookdev, and camera layout for the short.

We are very excited about Sherman, and look forward to hearing more about how you are using Unity for Animation!  Oh, and one final note: Sherman isn’t the Raccoon, he’s actually the cute fluffy Bird!

If you are interested in learning more about how Unity can be used for your animation projects, Unity's EDU team can provide private on-site training workshops that can be fully customized for you or your team’s needs.  Each workshop is led by a Unity Certified Instructor and features hands-on projects that teach Unity skills as well as best practices for implementation. You can also jump into our forums to discuss this blog post.

New to animation? Check out our beginner tutorials. If you’re an experienced animator, check out our intermediate and advanced content on the Unity Learn Premium platform.

For more information about Sherman, including access to the full project, go to our Film Solutions page, and get in touch to discuss how Unity and the Innovation Group can help bring your projects to life!
