
Making of The Blacksmith: Animation, Camera effects, Audio/Video

June 22, 2015 in Technology | 8 min. read

In this blog post we’ll be sharing our animation pipeline, and our approach to the post effects and audio and video output in the short film The Blacksmith. You can also check out our previous ‘Making Of’ post about scene setup, shading, and lighting, as well as our dedicated page for the project, where all the articles in this series are collected.

Animation pipeline

Starting With a Previz

Once we knew the direction we were moving in terms of art style and themes for The Blacksmith, we were eager to see our short film in action, so we set off to build an early version of it, which we could use as a starting point.

We made early mesh proxies of the characters, supplied them with temporary rigs with very rough skinning, and quickly blocked shots out in 3D, so that we could work on a previz.

This approach provided us with a sense of timing and allowed us to iterate on the action, camera angles and editing, so we were able to get an idea about how our film was going to flow, even very early in the process. From there on, it was the perfect way to communicate the goals to the external contractors who helped us on the project, and to guide the production process.

The cameras and rough animation from the previz were used as a placeholder going forward.

Motion Capture

For the performance of the characters, we worked with Swedish stunt- and mocap-specialists Stuntgruppen, who were not only able to carry out the motion capture performance, but also consulted us on the action our characters would engage in. We held some rehearsals with them before moving on to the mocap.

We had a very smooth shooting session at the venue of Imagination Studios in Uppsala. The previz was instrumental at the motion capture shoot as it served as a reference for everyone involved. What made things even easier was that we were able to see the captured performance retargeted directly onto our characters in real time during the shoot.

The Pipeline From End-to-End

Our animation pipeline was structured as follows:

  • Rigs were prepared in Maya with HIK and additional joints for extra controls.
  • Rigs were exported via fbx to Motionbuilder.
  • Mocap was applied and then remapped to rigs in Maya.
  • Facial Animation was applied.
  • Simulation was implemented on extra joint chains, such as hair and cloth.
  • Additional props were finally animated over this, and some fine tuning was done before export of fbx to Unity.
  • Rigs were all imported under the ‘Generic’ rig type, with takes added to these via joint name mapping in Unity’s timeline/sequencer prototype tool.
  • Cameras were animated mainly in Motionbuilder; then imported takes were attached to the render camera in Unity, via takes on the sequencer timeline.
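
As a rough illustration of the Unity import step above (not the project's actual tooling), an asset postprocessor can enforce the 'Generic' rig type for character fbx files automatically; the folder path is hypothetical:

```csharp
using UnityEditor;

// Illustrative sketch: force character fbx files to import with the Generic rig type.
// The "Assets/Characters" path is a hypothetical convention, not the project's actual layout.
public class CharacterRigPostprocessor : AssetPostprocessor
{
    void OnPreprocessModel()
    {
        if (!assetPath.StartsWith("Assets/Characters"))
            return;

        var importer = (ModelImporter)assetImporter;
        importer.animationType = ModelImporterAnimationType.Generic; // 'Generic' rig, as used for the two master rigs
        importer.importAnimation = true;
    }
}
```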

Setting Up In Unity

Once in Unity, we had to set up our rigs before we could move forward. The ‘Generic Rig’ setting was used for the two master rig files. We then relied upon auto-mapping of joints referencing the two master rig files for scene optimization and organization.

Next, we used the Scene Manager to shift from one shot setup to the next, but each shot always referenced one of the two master rigs. For this, the naming of joints had to match 1:1 for retargeting to work effectively. Shots were then sourced from many fbx files and were set up along the timeline, pointing to the master characters in the scene. The timeline generally consisted of these playable items: Character Track, Camera Track, Prop Track, Event Track, Switcher/Shot Manager Track.
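
The sequencer used here was an internal prototype, so its API isn't shown. As a rough illustration of what the Switcher/Shot Manager track does, a minimal hand-rolled equivalent might look like the following; all names are hypothetical and the shots are assumed to be sorted by start time:

```csharp
using UnityEngine;

// Illustrative only: a hand-rolled stand-in for a Switcher/Shot Manager track.
// Each shot activates one camera and plays a named take on the master rig's Animator.
// Assumes each take exists as a state on that Animator's controller.
public class SimpleShotSwitcher : MonoBehaviour
{
    [System.Serializable]
    public struct Shot
    {
        public float startTime;      // seconds on the master timeline
        public Camera camera;        // shot camera
        public string characterTake; // animation state to play on the master rig
    }

    public Shot[] shots;             // assumed sorted by startTime
    public Animator masterCharacter; // one of the two master rigs in the scene

    float _time;
    int _current = -1;

    void Update()
    {
        _time += Time.deltaTime;

        // Find the latest shot whose start time has passed and switch to it.
        for (int i = shots.Length - 1; i >= 0; i--)
        {
            if (_time >= shots[i].startTime)
            {
                if (i != _current)
                    Activate(i);
                break;
            }
        }
    }

    void Activate(int index)
    {
        for (int i = 0; i < shots.Length; i++)
            shots[i].camera.enabled = (i == index);

        masterCharacter.Play(shots[index].characterTake, 0, 0f);
        _current = index;
    }
}
```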

Approaching Cloth, Hair and Additional Joints

It was also important to us to ensure we had some nice movement on our characters’ clothes and hair. For that reason we ended up having additional joints placed into the skirts of both characters, and the hair of our Blacksmith lead. The setup allowed the simulation to be baked back onto the joints that were skinned into the rigs, so it was effectively pseudo cloth with a lot of extra bones baked. This was an effective approach for us, as we had a generous budget of 256 bones and four bone weights per character.
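
As a side note, the four-bone-weight part of that budget is something Unity exposes per quality level; a tiny sketch of setting it from code (the project may simply have configured this in the Quality Settings UI):

```csharp
using UnityEngine;

// Sketch: make sure skinning uses four bone influences per vertex, matching the
// character budget described above. In Unity 5 this is a per-quality-level setting.
public class SkinningQuality : MonoBehaviour
{
    void Awake()
    {
        QualitySettings.blendWeights = BlendWeights.FourBones;
    }
}
```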

[Images: the Challenger’s control joints, extra control joints, and the full control set]

The joints visible in the face drive the eyes for ‘look at’ control, and the rest around the jaw bind the beard to the face vertices using a riveting method in Maya.

This was done purely because we needed to keep the beard and hair as separate elements in Unity for the custom hair shader to work. The facial animation is blendshape driven, and the custom wrinkle map tech also relied on the blendshape values. That meant we had to use the rivet approach and bake the joints to follow the blendshapes on export.
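
To illustrate how blendshape values can feed a wrinkle map, here is a minimal sketch that reads a blendshape weight every frame and forwards it to a material property; the blendshape name and the "_WrinkleBlend" property are hypothetical, not the project's actual wrinkle map tech:

```csharp
using UnityEngine;

// Illustrative sketch: drive a wrinkle-map blend factor in a face material from a
// blendshape weight. The blendshape name and "_WrinkleBlend" property are invented.
public class WrinkleMapDriver : MonoBehaviour
{
    public SkinnedMeshRenderer face;
    public string blendShapeName = "browRaise";

    int _shapeIndex;
    MaterialPropertyBlock _props;

    void Start()
    {
        _shapeIndex = face.sharedMesh.GetBlendShapeIndex(blendShapeName);
        _props = new MaterialPropertyBlock();
    }

    void LateUpdate()
    {
        if (_shapeIndex < 0)
            return;

        // Blendshape weights are 0-100 in Unity; remap to 0-1 for the shader.
        float weight = face.GetBlendShapeWeight(_shapeIndex) * 0.01f;

        face.GetPropertyBlock(_props);
        _props.SetFloat("_WrinkleBlend", weight);
        face.SetPropertyBlock(_props);
    }
}
```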

Assembling the Mocap Scene in-house

Post-shoot we reconstructed the mocap in Motionbuilder using ‘Story’ mode.

At this point the Demo team’s animator took the lead with content, refining the cameras around the newly produced near-final performances. Being the person most familiar with the intent of each scene, it made sense for the animator to edit the sequence back together into master scene files. Additionally, any further ideas that had come up during the motion capture shoot could be incorporated at this stage.

The overall movie was divided into only a few master scene/fbx files, namely ‘Smithy to Vista’, ‘Vista to Challenger’, ‘Challenger and Blacksmith Battle’ and ‘Well Ending’. This grouping of shots simply made the file handling much easier. Each of the fbx files contained all the content needed to finalize the performances, and once completed they were exported in full to Unity. After that, the take/frame ranges were split inside the Unity animation importer.
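
Splitting take/frame ranges in the animation importer can also be scripted; the following is a hedged sketch using the ModelImporter clip API, with invented clip names, frame ranges and asset path:

```csharp
using UnityEditor;

// Sketch: split one master fbx into per-shot clips via the animation importer,
// mirroring what can be done through the importer UI. Names, frame ranges and the
// menu path are invented for illustration.
public static class ClipSplitter
{
    [MenuItem("Blacksmith/Split Example Clips")]
    static void Split()
    {
        // Hypothetical asset path for one of the master scene fbx files.
        const string path = "Assets/Animation/ChallengerAndBlacksmithBattle.fbx";
        var importer = (ModelImporter)AssetImporter.GetAtPath(path);

        importer.clipAnimations = new[]
        {
            new ModelImporterClipAnimation { name = "Shot_010", firstFrame = 0,   lastFrame = 180 },
            new ModelImporterClipAnimation { name = "Shot_020", firstFrame = 181, lastFrame = 420 },
        };

        importer.SaveAndReimport();
    }
}
```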

Next, all the content from the previz was replaced with the motion capture. This allowed ‘nearly final’ animation to be brought into Unity. At this stage it was important to get it in quickly, allowing art, lighting and FX to forge ahead with their own final pass on content.

After this was done the performances were refined and polished, but the overall action and camera work didn’t change much. Once the body work was finalized and locked off, the animator took all the content back to Maya for the final pass of facial animation, cloth and hair simulation, as well as the addition of props and other controls.

Final Exports to Unity

At this stage in the production the time had come to overwrite all placeholder files with final animation assets. The sequence structure and scene in Unity stayed pretty well intact, and it was mostly just a matter of overwriting fbx files and doing some small editing in the sequencer.

The following video provides an opportunity to compare and contrast brief cross-sections of the stages of production, from storyboard to the final version.

Camera effects

Since we wanted to achieve a cinematic look throughout this project, we ended up using some typical film post effects in every shot. We used several of Unity’s default effects: Depth of Field, Noise and Grain, Vignette and Bloom.

We developed a custom motion blur because we wanted to experiment with the effect that camera properties such as frame rate and shutter speed had on the final image. We also wanted to make sure we could accurately capture the velocities of animated meshes; something that involved tracking the delta movement of each skeletal bone in addition to the camera and object movements in the world.
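
To make the bookkeeping behind that concrete, here is a schematic sketch of tracking per-bone transform deltas from frame to frame; it is illustrative only and not the shipped motion blur, which also folds in camera motion, frame rate and shutter speed:

```csharp
using UnityEngine;

// Schematic sketch of per-bone velocity bookkeeping: remember last frame's matrices
// so a motion blur pass can reconstruct per-bone movement deltas. Everything here is
// illustrative rather than the shipped implementation.
public class BoneVelocityTracker : MonoBehaviour
{
    public Transform[] bones;  // skeletal bones to track
    Matrix4x4[] _previous;     // last frame's local-to-world matrices
    Matrix4x4[] _deltas;       // previous-to-current transforms, consumed by a blur pass

    void Start()
    {
        _previous = new Matrix4x4[bones.Length];
        _deltas = new Matrix4x4[bones.Length];
        for (int i = 0; i < bones.Length; i++)
            _previous[i] = bones[i].localToWorldMatrix;
    }

    void LateUpdate()
    {
        for (int i = 0; i < bones.Length; i++)
        {
            Matrix4x4 current = bones[i].localToWorldMatrix;

            // Movement of this bone since the last frame; combined with camera motion,
            // a velocity-buffer pass would turn this into per-pixel velocities.
            _deltas[i] = current * _previous[i].inverse;

            _previous[i] = current;
        }
    }
}
```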

To give a good sense of depth and scope in our large, scenic shots, we needed a way to achieve a believable aerial perspective. We decided to develop a component for custom atmospheric scattering, as a drop-in replacement for Unity’s built-in fog modes. After some initial experimentation based on various research papers, we opted for extensive artistic control rather than correctly modeling the physics of atmospheric scattering.
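
As a rough idea of the "drop-in replacement" shape of such a component (not the actual Blacksmith implementation; all parameter and shader property names below are invented), it boils down to exposing artistic controls and pushing them to shader globals that a scattering shader reads:

```csharp
using UnityEngine;

// Rough sketch of a fog-replacement component: expose artistic controls and push
// them to shader globals that an atmospheric scattering shader would consume.
// All property names here are invented; this is not the actual project component.
[ExecuteInEditMode]
public class AtmosphericScatteringSketch : MonoBehaviour
{
    public Color scatterTint = new Color(0.6f, 0.75f, 1.0f);
    [Range(0f, 0.01f)] public float density = 0.002f;
    public Light sun;

    void Update()
    {
        Shader.SetGlobalColor("_AtmoScatterTint", scatterTint);
        Shader.SetGlobalFloat("_AtmoDensity", density);
        if (sun != null)
            Shader.SetGlobalVector("_AtmoSunDir", -sun.transform.forward);
    }
}
```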

Here is an example of how our camera settings look in a randomly selected shot from the project:

[Images: atmospheric scattering settings UI and an example shot]

To further our goal of achieving a cinematic look, we wanted to simulate a classical film out process, i.e. printing to motion picture print film. We imported screenshots from Unity into DaVinci Resolve – an external application used for professional video production – where the color grading and tone mapping took place. There we produced a lookup table, which was imported and converted to a custom Unity image effect. At runtime, this image effect converted linear data to a logarithmic distribution, which was then remapped through the lookup table. This process unified tone mapping and grading into a single operation.
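
As a rough, CPU-side illustration of that remapping (the real effect does this per pixel in a shader, and the log curve and LUT layout here are invented):

```csharp
using UnityEngine;

// Simplified illustration of the grading step: encode linear light into a log
// distribution, then remap it through a lookup texture baked from the external grade.
// The constants and the 1D LUT layout are illustrative only.
public static class FilmLutSketch
{
    // Hypothetical log encoding; the actual effect used its own curve.
    public static float LinearToLog(float linear)
    {
        const float exposure = 1.0f;
        return Mathf.Clamp01(Mathf.Log10(linear * exposure * 10.0f + 1.0f));
    }

    // Remap one channel through a 1D LUT stored in a texture row.
    public static float ApplyLut(Texture2D lut, float logValue)
    {
        return lut.GetPixelBilinear(logValue, 0.5f).r;
    }
}
```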


Audio

The audio composition was a single track, which was imported into Unity and aligned to the Sequencer timeline. The music composer was given an offline ‘preview’ render of the movie to use as a reference when composing and synchronising the track. In order to achieve the desired mood of the piece, he used the song ‘Ilmarinen’s Lament’, which we licensed from American-Swedish indie musician Theo Hakola, and enhanced it with additional elements he composed and recorded himself.

Video output

We wanted to produce this short film in its entirety inside of Unity, without the need for any external post-processing or editing to finalize it. To achieve this, we put together a small tool that would render each frame of the movie with a fixed timestep. Each of these frames would then be piped in memory to an H.264 encoder, multiplexed with the audio stream, and written to an .mp4 on disk. The output from this process was uploaded straight to YouTube as the final movie.
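
The frame-stepping side of such a tool can be sketched with Unity's capture framerate support; the in-memory H.264 encoding and muxing were custom and are not shown here. A minimal sketch that writes numbered PNG frames instead (the output folder name is made up):

```csharp
using UnityEngine;
using System.IO;

// Minimal sketch of fixed-timestep frame capture. Setting Time.captureFramerate makes
// Unity advance game time by exactly 1/framerate per rendered frame, regardless of how
// long rendering actually takes. The actual tool piped frames to an H.264 encoder in
// memory; here we simply write numbered PNGs to a hypothetical folder.
public class FixedStepCapture : MonoBehaviour
{
    public int framerate = 24;
    public string outputFolder = "Capture";

    int _frame;

    void Start()
    {
        Time.captureFramerate = framerate;
        Directory.CreateDirectory(outputFolder);
    }

    void LateUpdate()
    {
        string path = Path.Combine(outputFolder, string.Format("frame_{0:D5}.png", _frame++));
        // ScreenCapture.CaptureScreenshot in newer Unity versions.
        Application.CaptureScreenshot(path);
    }
}
```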

And that’s it for this blog post. Stay tuned for more, and in the meantime make sure to check in at our special page for The Blacksmith, if you haven’t done so already. There you’ll find all the information we have already published about how we brought our short film to life.
