Sherman behind the screens: How to create cinema-quality lighting in Unity

November 13, 2019 in Industry | 18 min. read

Created by the Emmy Award-winning team that brought you Baymax Dreams, Sherman is a new short film crafted with Unity’s High Definition Render Pipeline (HDRP).

My name is Jean-Philippe Leroux, and I’m the lighting supervisor for Unity’s Media and Entertainment Innovation Group. I joined Unity two years ago, and I’ve since been lucky enough to work on great projects like Neill Blomkamp’s ADAM: Episode 2 and Episode 3 and the Emmy® Award-winning Baymax Dreams.

If you’re reading this, you’ve probably heard about a major industry shift happening right now: the rise of real-time production.

Real-time animation is much more than a different rendering solution – it’s actually revolutionizing the entire production pipeline. Real-time creation engines (also known as game engines because they were initially created for video game development) have now achieved the quality and maturity they need to serve as a production hub for cinema-quality content creation.

But how is real-time rendering different from offline rendering? How do we go from hours of processing on a gigantic render farm to instant feedback and final 4K frames in mere seconds? There’s no magic involved. It all comes down to an amalgam of many simplifications, approximations, optimizations, and a few clever cheats.

Part of our goal in creating the animated short film Sherman was to showcase HDRP’s ability to deliver some of the most advanced real-time fur ever created. This post, however, focuses on how we achieved the real-time lighting for Sherman in Unity, including concrete advice and tips that you can bring to your own projects. Whether you’re a 3D animation expert or a traditional lighting artist, this piece will give you the tools you need to start creating rich, nuanced lighting for real-time content in Unity.

Deconstructing Sherman


There’s a lot happening in the lighting of this frame from Sherman. Let’s deconstruct it to see how it all comes together. Using this shot as our example, we can tease out pretty much everything you’d need to know about lighting effects in Unity for your own projects. 

We can break the lighting effects used to create this image into four types, then discuss how to control each of them.

Mastering ambient lighting

The contribution of indirect diffuse lighting (Global Illumination) to our image

We started by defining the ambient lighting for the scene. We didn’t use a simple ambient term to emulate light scattering; instead, we took advantage of Unity’s Progressive Lightmapper to bake the Global Illumination results into lightmaps and create localized ambient lighting.

A view of our baked (indirect diffuse) lightmaps

Lightmapping

Our project is a multi-scene setup, and all of the static scene elements live under the episode_set scene game object.

Static scene elements in our frame.

Set lighting elements are in the episode_set_lighting scene, which is the active scene in Unity. This is important because the active scene determines where any lighting data generated will live. You can find more information on multi-scene workflows here.
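
As a minimal editor sketch of this setup, the snippet below loads the set additively and makes the lighting scene active before baking. The scene paths and menu entry are hypothetical placeholders, not the actual project layout:

```csharp
// Editor-only sketch: open the set and its lighting scene, then make the
// lighting scene active so generated lighting data is saved alongside it.
// Scene paths below are hypothetical placeholders.
using UnityEditor;
using UnityEditor.SceneManagement;
using UnityEngine.SceneManagement;

public static class LightingSceneSetup
{
    [MenuItem("Tools/Lighting/Open Set For Baking")]
    public static void OpenSetForBaking()
    {
        // Open the set geometry, then add the lighting scene on top of it.
        EditorSceneManager.OpenScene("Assets/Scenes/episode_set.unity", OpenSceneMode.Single);
        var lighting = EditorSceneManager.OpenScene("Assets/Scenes/episode_set_lighting.unity", OpenSceneMode.Additive);

        // The active scene determines where baked lighting data will live.
        SceneManager.SetActiveScene(lighting);

        // 001_timeline isn't needed while building set lighting; leave it unloaded.
    }
}
```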

We don’t need the 001_timeline scene to build our set lighting, and we will unload it during the process. If you look carefully, you might also spot a section labeled ToHide. Inside, there’s a camera and an instance of the sun’s directional light. Some post-processing effects are important to lighting, like Tonemapping and Ambient Occlusion, but you need a camera to display them. We use a temporary camera while lighting the set, but only one camera should remain when the project is fully loaded. Once baking is complete, the ToHide debug camera and lighting should be hidden again.

For any object in a given lightmap, the UVs should never overlap – each surface needs its own unique UV. Much of the time, you can let Unity generate lightmap UVs for you at import. (You can find more information on this topic here.) For complex and organic objects, it’s generally better to have an artist make a UV2 manually using their preferred digital content creation (DCC) software. Stitching seams is a simple way to achieve smoother results on edges that are not continuous in the lightmaps.
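
If you let Unity generate lightmap UVs at import, you can also enforce that choice from an AssetPostprocessor. Here’s a minimal sketch; the "/Set/" path convention and the unwrap values are assumptions for illustration:

```csharp
// Editor sketch: auto-generate lightmap UVs (UV2) at import for set geometry,
// while leaving hero/organic assets to hand-authored UV2s from a DCC.
using UnityEditor;

public class LightmapUVImporter : AssetPostprocessor
{
    void OnPreprocessModel()
    {
        if (!assetPath.Contains("/Set/"))
            return; // characters and organic props keep their hand-made UV2s

        var importer = (ModelImporter)assetImporter;
        importer.generateSecondaryUV = true;  // let Unity unwrap into UV2
        importer.secondaryUVHardAngle = 88f;  // example unwrap parameter
        importer.secondaryUVPackMargin = 4f;  // padding to limit light bleeding
    }
}
```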

Unity Lighting panel

Let’s take a look at the Lighting tab and explore the settings used to create this scene in Sherman:

  • First, we are using Mixed Lighting in Bake Indirect mode with the Progressive CPU Lightmapper. We didn’t use Realtime Global Illumination because it will not be supported in HDRP.  
  • The GPU Lightmapper makes huge progress with every new version of Unity, but it was still missing key features we needed during the production of Sherman.
  • For an animated production, I personally prefer not to use Shadowmask mode, which bakes shadows and most lighting into lightmaps. Using Bake Indirect instead keeps the light calculations simple and at a lower resolution – which means faster baking – and gives us the flexibility to make small shot-by-shot corrections to our directional light’s orientation. 
  • The Direct Samples setting is irrelevant for the mode selected here because we’re not baking any direct lighting, so you can ignore it. 
  • Start low for your Indirect Samples setting, around 250, and increase it as you grow more confident in your scene setup. 
  • For the Bounces setting, two should be enough, but I advise increasing to four for interiors.
  • When defining your Lightmap Resolution, start low and gently scale up. In this scene, we chose a final resolution of ten, which gives us 100 texels per square meter. Remember that lighting and physics require you to work in an environment where 1 Unity unit = 1 meter. The sketch after this list shows how these settings map to Unity’s scripting API.
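
Sherman itself was built on Unity 2018.4, where these options live only in the Lighting tab, but on recent Unity versions (2020.1+) the same choices can be scripted through the LightingSettings API. A minimal sketch, with property names as they appear in recent versions (check your version’s docs):

```csharp
// Editor sketch of the bake settings described above, via LightingSettings.
using UnityEditor;
using UnityEngine;

public static class ShermanBakeSettings
{
    [MenuItem("Tools/Lighting/Apply Bake Settings")]
    public static void Apply()
    {
        var settings = new LightingSettings
        {
            bakedGI = true,
            realtimeGI = false,                                 // not supported in HDRP
            lightmapper = LightingSettings.Lightmapper.ProgressiveCPU,
            mixedBakeMode = MixedLightingMode.IndirectOnly,     // Bake Indirect
            indirectSampleCount = 250,                          // start low, scale up
            maxBounces = 2,                                     // use 4 for interiors
            lightmapResolution = 10f                            // 100 texels per m²
        };
        Lightmapping.lightingSettings = settings;
    }
}
```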

It’s very important to be aware of, and take advantage of, the Scale In Lightmap parameter on the individual meshes in a scene. For small objects with triangles too small to cover multiple lightmap texels, increase this value; decrease it for larger objects that are far away. You can easily preview your lightmap resolution by enabling the Baked Lightmap view and Show Lightmap Resolution.

Not all objects are suitable for lightmaps. You shouldn’t mark complex, multi-surface objects for lightmapping, nor small objects that don’t contribute significantly to the Global Illumination or that would simply be too expensive to lightmap (for example, grass, leaves, small rocks, and debris). You can exclude these objects by disabling Lightmap Static on them.

Some large static objects aren’t really suitable for lightmapping, such as our hedges, but we still want them to contribute to the solution. There is a trick to have an entity taken into account without lightmapping it: set its Scale In Lightmap value to zero. For example, in the hedge section, you can clearly see the occlusion the hedges create, even though they’re not lightmapped.
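
Since Scale In Lightmap isn’t exposed as a public C# property, a common editor workaround is to edit the renderer’s serialized m_ScaleInLightmap field. Here’s a sketch that applies the zero-scale trick described above (the menu entry is a hypothetical convenience); note that Lightmap Static was renamed Contribute GI in Unity 2019.2:

```csharp
// Editor sketch: keep an object contributing occlusion to the bake
// without giving it lightmap space of its own.
using UnityEditor;
using UnityEngine;

public static class LightmapScaleUtility
{
    public static void SetScaleInLightmap(Renderer renderer, float scale)
    {
        // Scale In Lightmap is only reachable through the serialized property.
        var so = new SerializedObject(renderer);
        so.FindProperty("m_ScaleInLightmap").floatValue = scale;
        so.ApplyModifiedProperties();
    }

    [MenuItem("Tools/Lighting/Contribute GI Without Lightmap")]
    public static void ExcludeSelectionFromLightmaps()
    {
        foreach (var go in Selection.gameObjects)
        {
            // Keep the object static so it still occludes GI...
            // (LightmapStatic was renamed ContributeGI in 2019.2+)
            GameObjectUtility.SetStaticEditorFlags(go, StaticEditorFlags.LightmapStatic);

            // ...but give it zero lightmap space, like our hedges.
            var r = go.GetComponent<Renderer>();
            if (r != null) SetScaleInLightmap(r, 0f);
        }
    }
}
```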

Scene view settings and a close-up of the hedge area lightmap

What about the rest? Light probes

Light probe array with a one-meter spacing

After establishing the ambient light and lightmapping settings, a probe array lights the remaining small and dynamic objects. For Sherman, we created an array of probes with a spacing of one meter between them.
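
A grid like this is tedious to place by hand, so here’s a small editor sketch that fills a LightProbeGroup with a regular one-meter array. The extents are arbitrary example values:

```csharp
// Sketch: generate a regular light probe array with one-meter spacing,
// similar to the grid used on Sherman.
using System.Collections.Generic;
using UnityEngine;

[RequireComponent(typeof(LightProbeGroup))]
public class ProbeGridGenerator : MonoBehaviour
{
    public Vector3 size = new Vector3(20f, 4f, 20f); // area to cover, in meters
    public float spacing = 1f;                       // one probe per meter

    [ContextMenu("Generate Grid")]
    void Generate()
    {
        var positions = new List<Vector3>();
        for (float x = 0f; x <= size.x; x += spacing)
            for (float y = 0f; y <= size.y; y += spacing)
                for (float z = 0f; z <= size.z; z += spacing)
                    positions.Add(new Vector3(x, y, z)); // local to the group

        GetComponent<LightProbeGroup>().probePositions = positions.ToArray();
    }
}
```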

Larger objects like the grass lawn benefit from more refined probe lighting through the clever use of Light Probe Proxy Volumes. This way, an object is lit by localized samples of the probes rather than by a single probe.

The impact of using Light Probe Proxy Volumes on large probe-lit objects. In this case, the grass is divided into large tiles that are lit by probes.
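
Wiring a renderer to a proxy volume takes only a few lines. A minimal sketch, using Unity’s LightProbeProxyVolume component:

```csharp
// Sketch: light a large object (like the lawn tiles) with localized probe
// samples via a Light Probe Proxy Volume instead of a single probe.
using UnityEngine;
using UnityEngine.Rendering;

public static class ProxyVolumeSetup
{
    public static void UseProxyVolume(Renderer renderer)
    {
        // Add a proxy volume on the same object and point the renderer at it.
        var lppv = renderer.gameObject.AddComponent<LightProbeProxyVolume>();
        lppv.resolutionMode = LightProbeProxyVolume.ResolutionMode.Automatic;

        renderer.lightProbeUsage = LightProbeUsage.UseProxyVolume;
        renderer.lightProbeProxyVolumeOverride = lppv.gameObject;
    }
}
```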

Valuable tip: In a multi-scene setup, it is important to know that only the light probes from the last scene loaded will be taken into account.

Ambient occlusion

Now that we’ve explored how to generate great localized ambient light, how do we make sure that it doesn’t fill areas light shouldn’t reach, and how do we ground objects together?

First, you need to generate ambient occlusion maps for some of the objects, such as inside the doghouse or the raccoon’s mouth, using your preferred DCC.

Object-based Baked Ambient Occlusion

Next, to glue elements together, we use a post-processing effect called Screen Space Ambient Occlusion (SSAO).

Screen Space Ambient Occlusion

We’ll go into greater detail about this effect in the post-processing section below.

Planning your reflections

(L) The Indirect Specular Lighting (reflections) part of our image; (R) Ambient and baked reflections combined

In real-time rendering, your Indirect Specular Lighting, or reflections, comes from a complex layering of techniques. The ordering of layers cascades as follows:

  1. Real-time planar reflections
  2. Real-time reflection probes
  3. Small baked reflection probes
  4. Larger reflection probes (yes, the smaller volumes have priority over larger ones) 
  5. Sky 

Example of a planar reflection effect

Reflection resolution and cache size are managed in the HDRenderPipelineAsset. Basically, planar reflections are mirrors. These are perfectly suited for flat surfaces, but they can also be used in more complex situations like the one I created to help connect the raccoon and the inflated hose. Be careful with these, though, as they can be expensive in terms of memory.

Real-time reflection probes can also be used strategically to show important details in the reflections and occlusions of light sources. In this case, a small real-time spherical probe was used to engulf the bowl, allowing us to showcase the inflating hose about to rupture. 

Baked reflection probes need to cover our set. You should take advantage of the size priority, the parallax correction offered by the projection volume, and the advanced blending modes. A good rule of thumb is to place your capture point at the average height at which your camera work is done.
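
Here’s a small sketch of that setup: a baked, box-projected probe with its capture point at an assumed average camera height (the 1.2 m value is an example, and in HDRP the probe also carries an HDAdditionalReflectionData component for the advanced blend settings):

```csharp
// Sketch: a baked reflection probe with box projection for parallax correction.
using UnityEngine;
using UnityEngine.Rendering;

public static class ReflectionProbeSetup
{
    public static ReflectionProbe CreateBakedProbe(Vector3 position, Vector3 size)
    {
        var go = new GameObject("BakedReflectionProbe");
        // Capture from roughly the average height of the camera work.
        go.transform.position = new Vector3(position.x, 1.2f, position.z);

        var probe = go.AddComponent<ReflectionProbe>();
        probe.mode = ReflectionProbeMode.Baked;
        probe.size = size;          // smaller volumes take priority over larger ones
        probe.boxProjection = true; // parallax correction via the projection volume
        return probe;
    }
}
```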

If you look closely, you will notice that some reflection probes are very specific to shadowed areas. This was done to manage undesired specular reflections from our strongly directional sky.

Baked reflection probe layout

Following the action in the Timeline

The Timeline is your sequencer and the central hub for everything animation-based. From there, you can animate most things, but when it was time to create our shot-by-shot lighting, we used activation tracks instead. I personally find it easier to simply manage groups dedicated to a shot rather than animating every component of a lighting rig shot by shot.

Lighting group activations follow the camera cuts.

A few tricks:

  • It is important to keep in mind that everything you wish to animate in the Timeline needs to be in the same scene as the Timeline itself – in our case, the 001_timeline scene. Any link pointing to another scene will be lost when you reopen your project or enter PlayMode. 
  • Set your Post-playback state to Inactive, or you may end up with ghost light rigs when you’re rendering (see the sketch after this list).
  • Converting an animation track into a clip track is an easy way to reschedule your animation when the editing or timing of a shot is modified.
  • More information on activation tracks can be found here.
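
To make the post-playback tip concrete, here’s a minimal editor sketch that creates a per-shot activation track, binds it to a lighting group, and sets the Post-playback state to Inactive. The names and timings are example values:

```csharp
// Editor sketch: one activation track per shot lighting group.
using UnityEngine;
using UnityEngine.Playables;
using UnityEngine.Timeline;

public static class ShotLightingTracks
{
    public static void AddShotTrack(PlayableDirector director, GameObject lightingGroup,
                                    double start, double duration)
    {
        var timeline = (TimelineAsset)director.playableAsset;

        var track = timeline.CreateTrack<ActivationTrack>(null, lightingGroup.name);
        // Avoid ghost light rigs once the clip has finished playing.
        track.postPlaybackState = ActivationTrack.PostPlaybackState.Inactive;

        var clip = track.CreateDefaultClip(); // the "Active" clip for this shot
        clip.start = start;
        clip.duration = duration;

        // The lighting group must live in the same scene as the Timeline.
        director.SetGenericBinding(track, lightingGroup);
    }
}
```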

Why working with Prefabs is awesome

Prefabs are great for grouping objects that need to be reused multiple times. A Prefab lets you modify the composition of your group, and all instances will follow automatically. Unity Prefab instances let you easily and selectively revert any derivation you made, or apply it back to the original Prefab and propagate changes instantaneously. This system also gives you a framework for a seamless derivation workflow, whether to tweak instances or to quickly create and validate alternate variations.

As of version 2018.3, Unity supports nested Prefabs. This setup is perfect for quickly iterating on a lighting rig. As an example, here’s the Lighting Prefab structure for the first animation beat. You can see the shot-by-shot structure driven by activation tracks. Since only the lighting objects that exist during the current shot are displayed, all others are hidden.

In the lighting part of our scene Hierarchy, you can find:

  • Scene setting overrides
  • Objects that are used strictly to compose shadows for this shot
  • Density Volumes to control the atmospheric effects 
  • A real-time reflection probe for the bowl
  • Other Prefabs for the sun, fill and rim lights

Another benefit of working with Prefabs is that once an object has been instantiated in your scene, you don’t need to keep the scene checked out in source control to work on your lighting. This means that other members of your team can work on the scene at the same time.

It’s important to mention that certain properties cannot be overridden on a Prefab instance and require a custom copy for any variation. This is the case for ScriptableObject assets like Volume Profiles, Post-Processing Profiles, and Timelines.

Adding sunlight

An instance of the sun resides in each shot group. This allows us to do shot-by-shot minor tweaks to its orientation. We also used a tree cutout to carefully create interesting shadows.

High-quality shadows

By working in forward rendering, HDRP grants you access to percentage-closer soft shadows (PCSS). This technique lets you simulate a light’s penumbra.

In real-time, lights are not ray-traced; instead, shadows are computed with a technique called shadow mapping. Shadows are not free and need to be enabled for each light. HDRP’s Shadow Atlas system enables you to have a large number of shadow-casting lights. We used the largest Atlas possible, but it’s important to know that while a large Atlas substantially improves quality, it’s also more expensive in terms of computing time because it contains more information. 

The system will dynamically rescale shadows if your Atlas ever gets full. Shadow resolution is defined per light. In our setup, punctual lights are set to 1024 and the directional light is 4x2048. Note that point lights use 6x1024 (to represent a cube), so they are very expensive.

The directional light shadow benefits from a technique called Cascade Shadow Maps (CSM). To achieve optimal quality, you need to refine the distribution of cascades per shot using the HD Shadow Settings component overrides. You may also need to tweak the near clip plane in your Cinemachine virtual camera lens settings. This pushes the first cascade forward, which becomes increasingly important the longer the lens you are using.
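
Here’s a hedged sketch of both the per-light shadow setup and the near-clip tweak. The HDAdditionalLightData call is named as in recent HDRP versions, so treat it as an assumption and check your version’s API:

```csharp
// Sketch: per-light soft shadows and resolution, plus the Cinemachine
// near-clip tweak that pushes the first shadow cascade forward.
using UnityEngine;
using UnityEngine.Rendering.HighDefinition;
using Cinemachine;

public static class ShadowSetup
{
    public static void ConfigurePunctualLight(Light light)
    {
        light.shadows = LightShadows.Soft;
        var hdLight = light.GetComponent<HDAdditionalLightData>();
        hdLight.SetShadowResolution(1024); // 4x2048 was reserved for the directional
    }

    public static void PushFirstCascadeForward(CinemachineVirtualCamera vcam)
    {
        // A larger near clip plane matters more as the lens gets longer.
        vcam.m_Lens.NearClipPlane = 0.5f; // example value, tune per shot
    }
}
```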

Working with volumes

In HDRP, many rendering and post-processing properties are driven by volumes with associated “profiles.” Since Unity 2019.1, both of these sets of properties have been merged into the same volume component and are now organized in categories. The profile values are interpolated depending on the current camera position. If the camera is not in any volume, it will fall back on a global profile.

Each volume has a priority and weight value. Unfortunately, the values of the profile can't be animated, but the priority and weight values can be. This is exactly how we animated the post-processing for the electric shock effect.
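
As a minimal sketch of that idea, the component below pulses a high-priority volume’s weight over a short duration, similar in spirit to how the shock effect’s post-processing was driven (the field names and timing are assumptions):

```csharp
// Sketch: profile values can't be animated, but a volume's weight can.
using System.Collections;
using UnityEngine;
using UnityEngine.Rendering;

public class VolumeWeightPulse : MonoBehaviour
{
    public Volume shockVolume;     // high-priority volume holding the shock grade
    public float pulseDuration = 0.4f;

    public IEnumerator Pulse()
    {
        for (float t = 0f; t < pulseDuration; t += Time.deltaTime)
        {
            // Ramp the volume's influence up and back down over the pulse.
            shockVolume.weight = Mathf.Sin((t / pulseDuration) * Mathf.PI);
            yield return null;
        }
        shockVolume.weight = 0f;
    }
}
```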

Light layers

In Unity, you can assign an entity to specific light layers. We did this to manually place a catchlight to make the raccoon’s eyes look alive and interesting. This feature also lets me manage any undesired specular highlights coming from the many punctual lights I used for fill and rim lighting. We also used this functionality to prevent punctual lights from interfering with our water effects.

Adding a dedicated catchlight to the eyes
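
A hedged sketch of the catchlight setup: the light is restricted to a dedicated light layer, and only the eye renderers are placed on the matching rendering layer. The lightlayersMask and LightLayerEnum names come from HDRP and may differ between versions:

```csharp
// Sketch: restrict a catchlight to the eyes only, via light layers.
using UnityEngine;
using UnityEngine.Rendering.HighDefinition;

public static class CatchlightSetup
{
    public static void AssignToEyeLayer(Light catchlight, Renderer eyes)
    {
        var hdLight = catchlight.GetComponent<HDAdditionalLightData>();
        hdLight.lightlayersMask = LightLayerEnum.LightLayer1; // eyes-only layer

        // The renderer must be on a matching rendering layer to receive it.
        eyes.renderingLayerMask = (uint)LightLayerEnum.LightLayer1;
    }
}
```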

Controlling your direct specular lighting

One really interesting feature of lighting in HDRP is the ability to adjust the Max Smoothness of your lights. Lowering this value widens the specular highlight, simulating the intended size of the light source.

You can also specify whether a light affects specular or diffuse lighting via the Affect Specular and Affect Diffuse toggles, and test the effects of these settings in your content.
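
For example, a fill light can be made diffuse-only so it never creates an unwanted highlight. A minimal sketch, assuming the affectDiffuse/affectSpecular properties exposed by HDRP’s HDAdditionalLightData:

```csharp
// Sketch: a diffuse-only fill light that adds no specular highlight.
using UnityEngine;
using UnityEngine.Rendering.HighDefinition;

public static class FillLightSetup
{
    public static void MakeDiffuseOnly(Light fill)
    {
        var hdLight = fill.GetComponent<HDAdditionalLightData>();
        hdLight.affectDiffuse = true;
        hdLight.affectSpecular = false; // no specular contribution from this fill
    }
}
```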

Mastering volumetric lighting

Creating depth using volumetric lighting effects

Atmospheric effects are an important tool in your arsenal when it’s time to create depth in your scenes. HDRP’s high-quality volumetrics (enabled in the HDRP settings) provide a robust solution, and you can maximize the effect’s quality using the Volumetric Lighting Controller.

For the most localized effect, we used Density Volumes in Sherman, once again placing them shot by shot to maintain full control over their appearance.

In-Editor color grading and camera effects

Sherman before and after Color Grading.

Color Grading is now an essential part of any production process, and you can achieve impressive results directly in Unity. Post-processing effects have been fully integrated into HDRP since Unity 2019.1. Here, I’ll outline our workflow from Sherman, which used the version referred to as the Post-Processing V2 Stack (not compatible with the V3 stack integrated into HDRP as of 2019.1). Things have changed since then, but the concepts are all the same.

Post-processing effects work on volumes. Color Grading, Ambient Occlusion, Grain, and Filmic Motion Blur are defined in a master volume that resides in the lighting scene. Some of those values are overridden by higher-priority volumes that are part of our per-shot, Timeline-activated setup.

Every camera-related effect – Bloom, Depth of Field, and Vignette – is defined in a custom profile created for every single Cinemachine Virtual Camera using the CinemachinePostProcessing extension. At certain moments, a Post-exposure override was also used, since pre-exposure wasn’t yet available in Unity 2018.4.

Tonemapping

Since the entire render happens in HDR Linear space, it’s extremely important to define the color output. This project was set to use the ACES output (Academy Color Encoding System) from the beginning. ACES Tonemapping gives you highly contrasted images with very strong dark tones. To obtain our uplifting color palette, we needed to adjust our gamma. Most games that adopt this style of tonemapping do the same.
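
In the Post-Processing V2 Stack we used, that combination looks roughly like this; the gamma value is an arbitrary example, not Sherman’s actual grade:

```csharp
// Sketch (PPv2): ACES tonemapping with a lifted gamma for a brighter palette.
using UnityEngine;
using UnityEngine.Rendering.PostProcessing;

public static class GradeSetup
{
    public static void ApplyAcesWithLiftedGamma(PostProcessProfile profile)
    {
        if (profile.TryGetSettings(out ColorGrading grading))
        {
            grading.tonemapper.Override(Tonemapper.ACES);
            // Vector4: (x, y, z) per-channel, w = global offset.
            grading.gamma.Override(new Vector4(1f, 1f, 1f, 0.15f));
        }
    }
}
```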

Trackballs

The Unity team is very proud of the new Trackballs tool. Trackball controls are familiar to any grading professional – they’re easy to manipulate, and we really love them.

Trackballs controls in the Color Grading settings

Grading curves

Even though only one can be displayed at a time, many grading curves are available. We used Hue vs Sat, Sat vs Sat, and Lum vs Sat.

Waveform Monitor

A Post Process Debug component was also added to our main camera. From there, you can activate a Waveform Monitor and Histogram. Waveform is a really powerful tool for analyzing the health of blacks and whites.

Ambient Occlusion

Ambient Occlusion was already mentioned, but let’s take a closer look at some settings. Intensity was set to 0.75, and in some shots it was lowered to reduce noisy patterns, which showed mostly in the trees’ foliage. We did not use any non-physically based rendering (PBR) options like Color or Direct Lighting Strength. This effect is resolution-dependent, so it should be tweaked to fit your intended output resolution.

Depth of field

DOF is not only the byproduct of a fast lens; it’s an essential tool to drive the focus in your story. Images are moving fast, so people’s attention has to be carefully guided. 

Cinemachine has the wonderful option of tracking focus for you. The new version of this post-processing tool has been fully rewritten and is no longer resolution-dependent. In the 2018 version of Unity, the VFX Graph transparencies render queue wasn’t yet available. This means that for Sherman, certain shots that included our water effect had to avoid any shallow depth of field. The good news is that this VFX Graph feature was introduced in 2019.2.

Vignette

Vignette is a powerful effect, but one that can easily be overused. It mimics an artifact of filming through a lens, visually representing the loss of light in the corner of the frame. For realism, this effect should always be rounded.

Bloom

Bloom is a beautiful artifact created by strong light bleeding on your camera sensor or a dirty lens. You can simulate this effect in Unity but, once again, this technique should be used in moderation.

Conclusion

Lighting for animation is complex and difficult to master. In this post, we covered almost everything a traditional lighting artist would need to know to light a real-time rendering production. I really hope you enjoyed reading it and feel inspired to try some of the tricks I’ve outlined here. 

I can’t see myself ever going back to “the good old days” of traditional lighting. Real-time animation is revolutionizing the craft and creating new worlds of opportunity. With more room for taking risks and experimenting without consequences, the results can only be greater.    

For more information about Sherman, including access to the full project, see Unity’s Film Solutions page. Get in touch to discuss how Unity and the Innovation Group can help bring your projects to life.
