
In this blog post we’ll be sharing our animation pipeline, as well as our approach to post effects, audio, and video output in the short film The Blacksmith. You can also check out our previous ‘Making Of’ post about scene setup, shading, and lighting, as well as our dedicated page for the project, where all the articles in this series are collected.

Animation pipeline

Starting With a Previz

Once we knew the direction we were moving in terms of art style and themes for The Blacksmith, we were eager to see our short film in action, so we set off to build an early version of it, which we could use as a starting point.

We made early mesh proxies of the characters, supplied them with temporary rigs with very rough skinning, and quickly blocked shots out in 3D, so that we could work on a previz.

This approach provided us with a sense of timing and allowed us to iterate on the action, camera angles and editing, so we were able to get an idea about how our film was going to flow, even very early in the process. From there on, it was the perfect way to communicate the goals to the external contractors who helped us on the project, and to guide the production process.

The cameras and rough animation from the previz were used as placeholders going forward.

Motion Capture

For the performance of the characters, we worked with Swedish stunt and mocap specialists Stuntgruppen, who not only carried out the motion capture performance, but also advised us on the action our characters would engage in. We held some rehearsals with them before moving on to the mocap.

We had a very smooth shooting session at the venue of Imagination Studios in Uppsala. The previz was instrumental at the motion capture shoot as it served as a reference for everyone involved. What made things even easier was that we were able to see the captured performance retargeted directly onto our characters in real time during the shoot.

The Pipeline from End to End

Our animation pipeline was structured as follows:

  • Rigs were prepared in Maya with HIK and additional joints for extra controls.
  • Rigs were exported via FBX to MotionBuilder.
  • Mocap was applied and then remapped to the rigs in Maya.
  • Facial animation was applied.
  • Simulation was implemented on extra joint chains, such as hair and cloth.
  • Additional props were animated over this, and some fine-tuning was done before the FBX export to Unity.
  • Rigs were all imported under the ‘Generic’ rig type, with takes added to them via joint name mapping in Unity’s timeline/sequencer prototype tool (a rough sketch of this import step follows the list).
  • Cameras were animated mainly in MotionBuilder; imported takes were then attached to the render camera in Unity, via takes on the sequencer timeline.
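
As a rough illustration of that import step, an editor script along the following lines could enforce the ‘Generic’ rig type on incoming character FBX files. This is a minimal sketch, not the project’s actual tooling; the class name and folder convention are assumptions.

```csharp
using UnityEditor;

// Hypothetical sketch: force the 'Generic' rig type on character FBX imports.
// The "/Characters/" folder convention is an assumption, not the project's layout.
public class CharacterRigPostprocessor : AssetPostprocessor
{
    void OnPreprocessModel()
    {
        if (!assetPath.Contains("/Characters/"))
            return;

        var importer = (ModelImporter)assetImporter;
        importer.animationType = ModelImporterAnimationType.Generic;
    }
}
```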

Setting Up In Unity

Once in Unity, we had to set up our rigs before we could move forward. The ‘Generic Rig’ setting was used for the two master rig files. We then relied upon auto-mapping of joints, referencing the two master rig files, for scene optimization and organization.

Next, we used the Scene Manager to shift from one shot setup to the next, but each shot always referenced one of the two master rigs. For this to work, joint naming had to match 1:1 for retargeting to be effective. Shots were sourced from many FBX files and were set up along the timeline, pointing to the master characters in the scene. The timeline generally consisted of these playable items: Character Track, Camera Track, Prop Track, Event Track, and Switcher/Shot Manager Track.
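
The sequencer tool we used was still a prototype, so its API isn’t public; purely to illustrate the switcher/shot-manager idea, a hand-rolled version might look something like the sketch below. All names here are hypothetical.

```csharp
using UnityEngine;

// Hypothetical sketch of a switcher/shot-manager track: each shot activates a
// camera and plays a named take on the master character rigs in the scene.
[System.Serializable]
public class Shot
{
    public string takeName;     // animation take to play on the master rigs
    public float startTime;     // when this shot begins on the timeline
    public Camera shotCamera;   // camera to cut to for this shot
}

public class ShotSwitcher : MonoBehaviour
{
    public Animator[] masterCharacters; // the two master rigs in the scene
    public Shot[] shots;                // assumed sorted by startTime

    int current = -1;

    void Update()
    {
        // Cut to the next shot once the timeline passes its start time.
        while (current + 1 < shots.Length && Time.time >= shots[current + 1].startTime)
            SwitchTo(++current);
    }

    void SwitchTo(int index)
    {
        foreach (var shot in shots)
            shot.shotCamera.enabled = false;
        shots[index].shotCamera.enabled = true;

        foreach (var character in masterCharacters)
            character.Play(shots[index].takeName);
    }
}
```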

Approaching Cloth, Hair and Additional Joints

It was also important to us to ensure we had some nice movement on our characters’ clothes and hair. For that reason, we ended up placing additional joints in the skirts of both characters, and in the hair of our Blacksmith lead. The setup allowed the simulation to be baked back onto these joints, which were skinned into the rigs, so it was effectively pseudo-cloth with a lot of extra baked bones. This was an effective approach for us, as we had a generous budget of 256 bones and four bone weights per character.
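
A four-bone-weight budget like that only pays off if the quality settings allow it; in the Unity 5-era API, that could be ensured with something like the following minimal sketch (later Unity versions renamed this setting to QualitySettings.skinWeights):

```csharp
using UnityEngine;

// Minimal sketch: make sure skinning actually evaluates four bone influences
// per vertex, matching the characters' four-bone-weight budget (Unity 5 API).
public class SkinningQuality : MonoBehaviour
{
    void Awake()
    {
        QualitySettings.blendWeights = BlendWeights.FourBones;
    }
}
```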

[Images: ChalControlJoints, ChalControlXtra, ChalControlAll]

The joints visible in the face drive the eyes for ‘look at’ control, while the rest, around the jaw, bind the beard to the face vertices using a riveting method in Maya.

This was done purely because we needed to keep the beard and hair as separate elements in Unity for the custom hair shader to work. The facial animation is blendshape-driven, and the custom wrinkle map tech also relied on the blendshape values. That meant we had to use the rivet approach and bake the joints to follow the blendshapes on export.
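
The blendshape-to-material link behind such wrinkle map tech boils down to reading blendshape weights at runtime and feeding them to the face material. The following is a minimal sketch of the idea only; the blendshape index and the ‘_WrinkleBlend’ shader property are assumed names, not taken from the actual project.

```csharp
using UnityEngine;

// Illustrative sketch: drive a wrinkle-map blend factor in the face material
// from a blendshape weight. The shape index and "_WrinkleBlend" property are
// assumptions for the example, not the project's actual names.
public class WrinkleDriver : MonoBehaviour
{
    public SkinnedMeshRenderer face;
    public int browRaiseShapeIndex = 0;

    void LateUpdate()
    {
        // Blendshape weights are 0..100 in Unity; remap to 0..1 for the shader.
        float weight = face.GetBlendShapeWeight(browRaiseShapeIndex) / 100f;
        face.material.SetFloat("_WrinkleBlend", weight);
    }
}
```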

Assembling the Mocap Scene in-house

Post-shoot, we reconstructed the mocap in MotionBuilder using its ‘Story’ mode.

At this point the Demo team’s animator took the lead with content, refining the cameras around the newly produced, near-final performances. As the person most intimate with the intention of the scenes, the animator was the natural choice to edit the sequence back together into master scene files. Additionally, any further ideas that had come up during the motion capture shoot could be incorporated at this stage.

The overall movie was divided into only a few master scene/FBX files, namely ‘Smithy to Vista’, ‘Vista to Challenger’, ‘Challenger and Blacksmith Battle’ and ‘Well Ending’. This grouping of shots simply made the file handling much easier. Each of the FBX files contained all the content needed to finalize the performances, and once completed they were exported in full to Unity. After that, the take/frame ranges were split inside the Unity animation importer.
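
Splitting take/frame ranges in the animation importer can also be scripted; as a hedged illustration, an asset postprocessor could define per-shot clips on one of the master FBX files like this. The file name, clip names and frame ranges below are invented for the example.

```csharp
using UnityEditor;

// Hypothetical sketch: split a master FBX take into per-shot clips inside the
// animation importer. File name, clip names and frame ranges are invented.
public class ShotClipSplitter : AssetPostprocessor
{
    void OnPreprocessModel()
    {
        if (!assetPath.EndsWith("ChallengerAndBlacksmithBattle.fbx"))
            return;

        var importer = (ModelImporter)assetImporter;
        importer.clipAnimations = new[]
        {
            new ModelImporterClipAnimation { name = "Shot01", firstFrame = 0,   lastFrame = 240 },
            new ModelImporterClipAnimation { name = "Shot02", firstFrame = 241, lastFrame = 480 },
        };
    }
}
```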

Next, all the content from the previz was replaced with the motion capture, bringing ‘nearly final’ animation into Unity. At this stage it was important to get it in quickly, allowing art, lighting and FX to forge ahead with their own final pass on content.


After this was done, the performances were refined and polished, but the overall action and camera work didn’t noticeably change. Once the body work was finalized and locked off, the animator took all the content back to Maya for the final pass of facial animation, cloth and hair simulation, as well as the addition of props and other controls.

Final Exports to Unity

At this stage in the production the time had come to overwrite all placeholder files with final animation assets. The sequence structure and scene in Unity stayed pretty well intact, and it was mostly just a matter of overwriting fbx files and doing some small editing in the sequencer.

The following video provides an opportunity to compare and contrast brief cross-sections of the stages of production, from storyboard to the final version.

Camera effects

Since we wanted to achieve a cinematic look throughout this project, we ended up using some typical film post effects in every shot. We used several of Unity’s default effects: Depth of Field, Noise and Grain, Vignette and Bloom.
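
These effects ship as camera components with Unity 5’s Standard Assets; assuming the Effects package has been imported, stacking them on the render camera can be as simple as this sketch (each component still needs its parameters tuned per shot):

```csharp
using UnityEngine;
using UnityStandardAssets.ImageEffects;

// Sketch: stack the stock Unity 5 image effects on the render camera.
// Assumes the Effects package from Standard Assets has been imported.
[RequireComponent(typeof(Camera))]
public class FilmLookSetup : MonoBehaviour
{
    void Awake()
    {
        gameObject.AddComponent<DepthOfField>();
        gameObject.AddComponent<NoiseAndGrain>();
        gameObject.AddComponent<VignetteAndChromaticAberration>();
        gameObject.AddComponent<Bloom>();
    }
}
```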


We developed a custom motion blur because we wanted to experiment with the effect that camera properties such as frame rate and shutter speed had on the final image. We also wanted to make sure we could accurately capture the velocities of animated meshes; something that involved tracking the delta movement of each skeletal bone in addition to the camera and object movements in the world.
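
The core of any such velocity pass is remembering each transform’s matrix from the previous frame. The sketch below illustrates the general per-object idea only, not the team’s actual implementation; a skinned character would additionally track the delta of every bone, and the shader property names here are assumptions.

```csharp
using UnityEngine;

// Illustrative sketch of per-object velocity tracking for motion blur: keep
// last frame's localToWorld matrix and hand both matrices to the blur shader,
// which can then reconstruct a screen-space velocity per vertex.
public class VelocityTracker : MonoBehaviour
{
    Renderer rend;
    Matrix4x4 previousLocalToWorld;

    void Start()
    {
        rend = GetComponent<Renderer>();
        previousLocalToWorld = transform.localToWorldMatrix;
    }

    void LateUpdate()
    {
        // "_PreviousM" and "_CurrentM" are assumed shader property names.
        rend.material.SetMatrix("_PreviousM", previousLocalToWorld);
        rend.material.SetMatrix("_CurrentM", transform.localToWorldMatrix);
        previousLocalToWorld = transform.localToWorldMatrix;
    }
}
```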

To give a good sense of depth and scale in our large, scenic shots, we needed a believable aerial perspective. We decided to develop a custom atmospheric scattering component as a drop-in replacement for Unity’s built-in fog modes. After some initial experimentation based on various research papers, we opted for extensive artistic control rather than correctly modeling the physics of atmospheric scattering.
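
At its simplest, aerial perspective of this kind fades each fragment toward a scattering color based on view distance. The helper below is a deliberately simplified, toy stand-in for the project’s model, shown only to make the idea concrete; every parameter is an artistic knob, not a physical quantity.

```csharp
using UnityEngine;

// Toy sketch of artist-controlled aerial perspective: blend scene color toward
// a scattering color with view distance, like Unity's built-in exponential fog
// but with a tintable in-scattering term. A stand-in, not the project's model.
public static class AerialPerspective
{
    public static Color Apply(Color sceneColor, float viewDistance,
                              float density, Color scatterColor)
    {
        float extinction = Mathf.Exp(-density * viewDistance);
        return sceneColor * extinction + scatterColor * (1f - extinction);
    }
}
```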

Here is an example of how our camera settings look in a randomly selected shot from the project:

[Images: atmospheric scattering settings UI, and an example shot]

To further our goal of achieving a cinematic look, we wanted to simulate a classical film-out process, i.e. printing to motion picture print film. We imported screenshots from Unity into DaVinci Resolve – an external application used for professional video production – where the color grading and tone mapping took place. There we produced a lookup table, which was imported back into Unity and driven by a custom image effect. At runtime, this image effect converted linear data to a logarithmic distribution, which was then remapped through the lookup table. This process unified tone mapping and grading into a single operation.
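
A bare-bones version of such a LUT image effect might look like the sketch below; the actual linear-to-log remap and LUT sampling would live in the shader, and the shader name and properties here are illustrative, not the project’s.

```csharp
using UnityEngine;

// Bare-bones sketch of a LUT grading image effect: a full-screen shader
// converts linear color to a log distribution and remaps it through a lookup
// texture baked in DaVinci Resolve. Shader and property names are illustrative.
[RequireComponent(typeof(Camera))]
public class FilmLutGrade : MonoBehaviour
{
    public Texture2D lut;   // lookup table exported from the grading session
    Material material;

    void Start()
    {
        // Hypothetical shader implementing the linear-to-log + LUT remap.
        material = new Material(Shader.Find("Hidden/FilmLutGrade"));
        material.SetTexture("_Lut", lut);
    }

    void OnRenderImage(RenderTexture src, RenderTexture dst)
    {
        Graphics.Blit(src, dst, material);
    }
}
```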

[Image: FujiD55]

Audio

The audio composition was a single track, which was imported into Unity and aligned to the sequencer timeline. The music composer was given an offline render – a ‘preview’ – of the movie to use as a reference when composing and synchronising the track. To achieve the desired mood of the piece, he used the song ‘Ilmarinen’s Lament’, which we licensed from American-Swedish indie musician Theo Hakola, and enhanced it with additional elements he composed and recorded himself.

Video output

We wanted to produce this short film in its entirety inside of Unity, without the need for any external post-processing or editing to finalize it. To achieve this, we put together a small tool that would render each frame of the movie with a fixed timestep. Each of these frames would then be piped in memory to an H.264 encoder, multiplexed with the audio stream, and written to an .mp4 on disk. The output from this process was uploaded straight to YouTube as the final movie.
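
The fixed-timestep part of such a capture tool is straightforward in Unity: Time.captureFramerate forces the engine to advance exactly one frame interval of game time per rendered frame, regardless of real time. The sketch below shows only that part, writing PNG frames to disk; the in-memory piping to an H.264 encoder and the audio multiplexing are beyond a short example.

```csharp
using System.IO;
using UnityEngine;

// Minimal sketch of fixed-timestep frame capture. Time.captureFramerate makes
// Unity advance exactly 1/24 s of game time per rendered frame. Unlike the
// project's tool, this writes PNGs to disk instead of piping to an encoder.
public class MovieCapture : MonoBehaviour
{
    public int frameRate = 24;

    void Start()
    {
        Time.captureFramerate = frameRate;
        Directory.CreateDirectory("Capture");
    }

    void LateUpdate()
    {
        Application.CaptureScreenshot(
            string.Format("Capture/frame_{0:D5}.png", Time.frameCount));
    }
}
```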

And that’s it for this blog post. Stay tuned for more, and in the meantime make sure to check in at our special page for The Blacksmith, if you haven’t done so already. There you’ll find all the information we have already published about how we brought our short film to life.

Comments

  1. VIDEO OUTPUT

    Can you explain the video output process in Unity?
    Please try to create a tutorial.

  2. Thanks for the information. A little request: can we get these mocap animations, and where?


  3. Is there any reason why the atmospheric scattering is included in each object shader rather than as a post process?

    1. Torbjorn Laedre

      June 25, 2015 at 4:05 pm

      Hi.

      Both options are available. We use special shader variants to run most of the calculations per-vertex which makes it a lot faster in forward mode. If you’d like to run it as a post-process with any ol’ shader instead, there’s an option for that (Force Post Effect); you’ll have to add the AtmosphericScatteringDeferred image effect to the camera for that to work. In deferred mode, this is the default and only option. The exception is transparent objects, which do require custom shaders since they don’t write depth.

      1. Thanks, I found the AtmosphericScatteringDeferred script, but I can’t find the option “Force Post Effect” there, or in any of the other scripts and shaders.

        1. Torbjorn Laedre

          June 26, 2015 at 8:05 am

          If you’re looking in the environments package, you might wanna look in the dedicated atmospherics package instead. These features might not be 100% in sync across packages since they weren’t finalized at exactly the same time.

  4. Please share The Blacksmith demo project publicly, with all the scripts and effects developed for it, as a sample of what can be done in Unity, so we have good examples of how to achieve such effects in our own games.

    Right now we have no samples of good graphics built in Unity, or of how to achieve all these effects.

    1. I have a hunch that they’ll release the project files once the cinematic director gets patched in.

  5. I am especially interested in the atmospheric scattering. Will this be released at some point?

    1. It’s on the asset store now :-)

  6. Very awesome production! You can get sequencing tools now while we wait – https://www.assetstore.unity3d.com/en/#!/content/19779

  7. Anthony Madden

    June 22, 2015 at 7:57 pm

    Why not release the H.264 encoder you guys wrote for Unity? I feel that would be extremely helpful when creating live gameplay trailers in the Unity editor.

    1. Torbjorn Laedre

      June 24, 2015 at 5:54 pm

      Hi Anthony. This probably requires some clarification: the tool we wrote was more like a framework for capturing all the data from the many cameras we had in Unity – it doesn’t handle movie encoding itself. The data we captured from each frame was optionally processed and converted, and then piped to whatever the active output mode was set to (which could be an automatically spawned ffmpeg process for movie encoding, or just to disk for sequences of LDR or HDR images).

    2. You might have some luck with the uRecord asset by Well Fired. They offer a free watermarked version if you want to test it out first.
      https://www.assetstore.unity3d.com/en/#!/content/9154

      But I agree, a built-in solution for recording footage (either to an external encoder or to disk) would be a welcome addition to Unity, making it easier to capture gameplay footage for trailers, or even letting users create films and machinima entirely through Unity.
