How to set up Unity’s High Definition Render Pipeline for high-end visualizations

January 9, 2020 in Industry | 17 min. read

Prior to Unite Copenhagen in September 2019, Unity collaborated with Lexus and its agency Team One to showcase a real-time car configurator and promote Unity as a photographic tool with real-world camera controls, using the High Definition Render Pipeline (HDRP). The demo featured a high-detail car model from Lexus in different trims, a fully modeled 3D environment of the Amalienborg Palace in Copenhagen, and three distinct lighting conditions.

Unity’s Spotlight team in the UK was in charge of the quality of the environment and lighting, to ensure the car appeared in the best possible light, literally.

This article will explain in more detail how I set up HDRP in the context of this automotive scenario, and how physically-correct lighting and camera setups can help you to reach an impressive level of visual fidelity for high-end visualizations made with Unity.

Software and data

The demo used an early build of Unity 2019.3 in conjunction with HDRP 7.1.x. The car materials rely on Unity’s extensive Measured Materials library, which offers, among many others, a realistic car paint shader with metal flakes. The environment uses the standard HDRP Lit shader. One important note: HDRP’s Ray-Tracing capabilities were not used for this demo, because they were still in early preview at the time.

The extremely detailed pre-tessellated car model was provided by the manufacturer. The outsourcing studio elite3d entirely modeled and textured the Amalienborg Palace to a high standard, using software such as Maya (primarily for asset exports), 3ds Max (hard-surface modeling), ZBrush (sculpting on top of scanned data provided by Domeble), as well as Substance tools (material creation and painting) and Marmoset Toolbag (baking). Finally, Unity’s Spotlight team created the lighting for the environment, applied all the materials to the hundreds of car parts, and ensured the quality of the visuals was optimal.

Unlike many other automotive configurators, we did not use a static dome or a backplate for the environment: instead, the entire courtyard is modeled in 3D. The car can thus be placed wherever needed, and the lighting and time of day can be changed at will. One of the major drawbacks of static HDRIs, domes, and backplates is that their perspective and lighting are baked in at the moment of capture. They therefore offer little flexibility: they impose strong restrictions on the lighting and on how far the subject and camera can move from the original capture point. As you can see in the following animation, a fully modeled environment is significantly more flexible.

HDRP settings

If you aren’t already familiar with the High Definition Render Pipeline, please have a look at the official documentation to get started. Pay particular attention to the Volume framework, the main way to control HDRP’s effects.
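If you prefer to drive the Volume framework from a script, here is a minimal sketch of adjusting an override at runtime. The component and field names (ExposureTweaker, evCompensation) are purely illustrative; the calls shown are the HDRP 7.x-era Volume scripting API.

```csharp
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.HighDefinition;

public class ExposureTweaker : MonoBehaviour
{
    [SerializeField] Volume volume;               // a Volume with a profile assigned in the inspector
    [SerializeField] float evCompensation = 0.5f; // illustrative value

    void Start()
    {
        // TryGet returns false if the profile has no Exposure override yet.
        if (volume != null && volume.profile.TryGet(out Exposure exposure))
        {
            exposure.compensation.overrideState = true;
            exposure.compensation.value = evCompensation;
        }
    }
}
```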

I also highly recommend you have a look at the HDRP Wizard under “Window > Rendering Pipeline”, because it can automatically fix incorrect settings in your HDRP project, most notably the Color Space: using the Gamma color space rather than Linear radically reduces image quality.
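The Wizard handles this check (and many others) for you, but as a hedged illustration, a tiny editor script can also verify the color space; the menu item name below is hypothetical.

```csharp
#if UNITY_EDITOR
using UnityEditor;
using UnityEngine;

public static class ColorSpaceCheck
{
    // Hypothetical menu entry: report whether the project uses the Linear color space.
    [MenuItem("Tools/Check Color Space")]
    static void Check()
    {
        if (PlayerSettings.colorSpace != ColorSpace.Linear)
            Debug.LogWarning("HDRP expects the Linear color space; current: " + PlayerSettings.colorSpace);
        else
            Debug.Log("Color space is Linear.");
    }
}
#endif
```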

Furthermore, using the settings detailed in this article will most likely result in a poor framerate on any platform but a high-end PC when dealing with such a demanding scene. To keep a comfortable working experience in Unity at 1440p or 4K resolution, under 100 milliseconds per frame, a GeForce GTX 1080 Ti or better is recommended.

Rendering settings

First of all, head to HDRP’s quality settings under “Edit > Project Settings > Quality” and enable the following settings:

  1. “R16G16B16A16”: The fp16 precision can reduce unwanted banding on light and fog gradients stretching over a large screen-space distance, at the cost of increased memory consumption and bandwidth. Additionally, I recommend activating 8-bit Dithering in the Camera’s inspector to reduce banding further.
  2. “Forward Only”: Forward rendering generally offers higher quality compared to Deferred rendering. Furthermore, at the time of writing this article, it is the only mode that offers a convenient way to quickly enable a finer shadow filtering technique that we will cover later in this article.
  3. “MSAA 2x/4x/8x”: enables support for multisample anti-aliasing in your project. The higher the multiplier, the smoother the image and the higher the cost.

MSAA must also be manually enabled in the Frame Settings of every GameObject type able to render the scene, i.e. cameras and reflection probes. In the Default Frame Settings under “Edit > Project Settings > HDRP Default Settings”, toggle “MSAA within Forward” globally for cameras and the different types of reflections.

MSAA for reflections is particularly important when dealing with highly reflective materials such as chrome and car paint: any visible aliasing will harm the photo-realism of the rendered frames. Furthermore, cameras and reflection probes can use their own custom Frame Settings in their respective inspectors: for example, you can enable MSAA for the Baked Reflection Probes, disable it for the real-time Reflection Probes to reduce the performance impact, and rely on a faster type of anti-aliasing for the camera.

Temporal anti-aliasing (TAA)

TAA is particularly effective at removing jaggies that MSAA cannot handle because they are not caused by geometric aliasing, such as specular (lighting), texture, or transparency aliasing. TAA can in certain cases stand on its own as the only form of anti-aliasing for the camera, especially given its quality and its relatively low cost compared to MSAA 4x or 8x. Nevertheless, due to its temporal nature, i.e. its reliance on previous frames and motion vectors, TAA can introduce unwanted ghosting with fast-moving cameras or subjects, and can soften the image slightly. Thankfully, HDRP’s Cameras provide a TAA Sharpness control.
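For reference, here is a hedged sketch of switching a camera to TAA from a script, using HDRP 7.x-era property names that may differ in other versions; in practice you can simply set this in the camera’s inspector.

```csharp
using UnityEngine;
using UnityEngine.Rendering.HighDefinition;

[RequireComponent(typeof(HDAdditionalCameraData))]
public class EnableTAA : MonoBehaviour
{
    void Start()
    {
        var camData = GetComponent<HDAdditionalCameraData>();

        // Use temporal anti-aliasing for this camera...
        camData.antialiasing = HDAdditionalCameraData.AntialiasingMode.TemporalAntialiasing;

        // ...and soften the sharpening filter slightly (lower = softer, higher = sharper).
        camData.taaSharpenStrength = 0.4f;
    }
}
```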

To force TAA in the Scene View as well, toggle the feature in “Edit > Preferences > HD Render Pipeline > Scene View Anti-Aliasing”, and then activate “Animated Materials” in the Scene View toolbar.

Texture filtering

Another very important parameter to consider is the filtering of the textures. The best possible filtering, i.e. Anisotropic Filtering, can be forced globally via “Project Settings > Quality > Anisotropic Textures”. This will ensure surfaces nearly parallel to the view direction, typically ground and road materials, remain sharp at medium range and beyond, for a negligible performance impact on modern high-end GPUs.
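The same setting can also be forced from code if needed; this is a minimal sketch equivalent to choosing “Forced On” in the Quality settings.

```csharp
using UnityEngine;

public class ForceAniso : MonoBehaviour
{
    void Awake()
    {
        // Force anisotropic filtering globally, regardless of each texture's import settings.
        QualitySettings.anisotropicFiltering = AnisotropicFiltering.ForceEnable;
    }
}
```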

Lighting settings

In HDRP’s quality settings, in the Lighting section, I suggest you enable the following options, either to maximize the quality of the lighting rendering or to facilitate your workflow.

Light layers

Light Layers allow you to control which Lights and Reflection Probes affect specific objects in your scene. In other words, they can radically simplify the setup when selectively applying lighting and reflections to parts of a model. For instance, I use them to get a correct level of reflections on the wheels and under the arches of the vehicle, while preventing the main bodywork from receiving unwanted dark reflections, by following these steps:

  1. Place a Reflection Probe under each inner fender and tune its influence volume.
  2. Assign these four probes to “Light Layer 1” in the probes’ inspector.
  3. Add “Light Layer 1” to the list of Rendering Layers that can affect the wheels and inner fenders.

Result: the wheels and inner fenders receive correct, darker reflections, while the main bodywork remains unaffected and is correctly lit by another, external Reflection Probe.

I use this workflow for many other parts of the car to:

  • Prevent the interior Lights and Reflection Probes from leaking onto the exterior bodywork, and vice versa
  • Restrict the Planar Reflection Probes to the mirror geometry
  • Produce believable reflections for the headlight and taillight reflectors and lamp casings, without affecting the bodywork
  • Assign Reflection Probes to certain iconic chrome parts, such as the front grille

No more (or at least less) fiddling with the bounds and Blend Distances of Reflection Probes when trying to encompass complex shapes! The only minor drawback is that your model has to be separated into several GameObjects to take advantage of this feature.
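For completeness, here is a hedged sketch of how the interior-light case from the list above could be scripted, assuming HDRP 7.x-era API names (lightlayersMask, LightLayerEnum); in practice the same assignments are made in the Light, Reflection Probe, and Renderer inspectors as described above.

```csharp
using UnityEngine;
using UnityEngine.Rendering.HighDefinition;

public class InteriorLightLayer : MonoBehaviour
{
    [SerializeField] HDAdditionalLightData interiorLight; // hypothetical reference to an interior Light
    [SerializeField] Renderer[] interiorRenderers;        // seats, dashboard, trim...

    void Start()
    {
        // The Light only affects objects on Light Layer 1...
        interiorLight.lightlayersMask = LightLayerEnum.LightLayer1;

        // ...and the interior renderers opt in to that Rendering Layer.
        foreach (var r in interiorRenderers)
            r.renderingLayerMask |= (uint)LightLayerEnum.LightLayer1;
    }
}
```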

Shadow filtering

Another critical parameter to achieve near-photorealism is the quality of the shadows and the simulation of the penumbra; thankfully, HDRP excels in this area. In HDRP’s quality settings, enable “High” Filtering Quality and increase the maximum shadow resolution to fit your requirements for the different types of light you want to use: Directional, Punctual (Point and Spot), and Area lights.

There is a common misconception in real-time graphics that sharper shadows mean higher quality, because higher-resolution shadowmaps produce relatively sharper shadows when no additional blurring is applied. In reality, however, the further a shadow extends from its caster, the blurrier it becomes: shadows are generally much softer than most people expect. HDRP is able to simulate this blurriness with a high degree of fidelity: distant shadows are heavily blurred, whereas shadows close to their caster remain sharp.

In the example below, see how the central Equestrian Statue appears realistically blurred on the facade of the Palace as the shadow of the statue travels over a long distance. Meanwhile, the shadow originating from the car remains fairly sharp, yet gradually becomes blurrier as the distance from the car increases.

The quality of the filtering is controlled in the Shadow section of the Light inspector. Furthermore, HDRP allows you to simulate larger emissive Shapes, thanks to the Angular Diameter (Directional Light), or the Radius (Point & Spot Light). These controls affect both the softness of the shadows and specular highlights in a physically-based fashion: the larger the emissive source, the softer its shadows and specular highlights.

If you notice stepping artifacts (shadow acne), increase the Slope-Scale Depth Bias: more bias will be applied to polygons facing away from the Light. Thus, unwanted shadows will visibly “sink” under their surface. Pushing the Near Plane and reducing the Range of the Light can also offer more shadowing precision, although it is not always a practical solution.
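To give an idea of the scripting side, here is a minimal sketch, assuming HDRP 7.x-era property names, that enlarges the apparent size of the emissive sources to obtain physically softer shadows and specular highlights; the references and values are illustrative.

```csharp
using UnityEngine;
using UnityEngine.Rendering.HighDefinition;

public class SoftShadowSetup : MonoBehaviour
{
    [SerializeField] HDAdditionalLightData sun;        // Directional Light
    [SerializeField] HDAdditionalLightData streetLamp; // Point or Spot Light

    void Start()
    {
        // The real sun covers roughly half a degree of sky: larger values soften shadows further.
        sun.angularDiameter = 0.53f;

        // A 10 cm emissive sphere for the lamp: softer shadows and highlights up close.
        streetLamp.shapeRadius = 0.1f;
    }
}
```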

Contact shadows

Contact shadows are particularly effective at fixing the lack of resolution in the distant cascades of the directional shadowmap. They can dramatically improve the quality of the self-shadowing for micro geometric details at any distance from the camera, for both Directional and Punctual Lights. The facades of the Palace noticeably benefit from this feature: fine ornaments, ledges, sentry boxes and windows cast believable shadows.

However, as with any screen-space technique, off-screen geometry, transparent materials, and objects hidden behind other opaque objects do not produce contact shadows, because they do not appear in the screen-space depth buffer. This can result in “holes” in the contact shadows, as well as a lack of shadowing at the very edge of the frame. In practice, the image-quality boost provided by this effect, especially at medium and long range, usually outweighs those drawbacks.
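As a hedged sketch (HDRP 7.x-era parameter names, which may differ in other releases), the Contact Shadows override can also be enabled on a Volume profile from a script; each Light additionally has its own Contact Shadows toggle in its inspector.

```csharp
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.HighDefinition;

public class EnableContactShadows : MonoBehaviour
{
    [SerializeField] Volume volume; // a Volume whose profile contains a Contact Shadows override

    void Start()
    {
        if (volume.profile.TryGet(out ContactShadows contactShadows))
        {
            contactShadows.enable.overrideState = true;
            contactShadows.enable.value = true;

            contactShadows.length.overrideState = true;
            contactShadows.length.value = 0.15f; // ray length in meters, illustrative value
        }
    }
}
```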

Lighting setup

First of all, I highly recommend you activate ACES tonemapping to ensure a filmic light response from the get-go. Obviously, you may still color grade the final image at a later stage, preferably after your lighting pass, with Volume Overrides such as White Balance, Color Adjustments, and Color Curves.

White Balance in particular is important when using physically-based color temperatures for the sun, moon, and artificial lighting, to reproduce the chromatic adaptation of the human visual system, or the automatic or manual white balance of a camera. For example, uncorrected images lit by incandescent lighting at around 3000 Kelvin will have a very strong orange hue. White balance can also be used artistically in many situations; nonetheless, color reproduction in high-end visualizations must be taken very seriously, especially when it comes to car paints.
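As a minimal sketch of this setup, the following script forces ACES tonemapping and adds a White Balance override on a Volume profile; the temperature value is illustrative and should be tuned by eye against a neutral reference.

```csharp
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.HighDefinition;

public class GradingSetup : MonoBehaviour
{
    [SerializeField] Volume volume; // a Volume whose profile contains the grading overrides

    void Start()
    {
        // Force the ACES tonemapper for a filmic light response.
        if (volume.profile.TryGet(out Tonemapping tonemapping))
        {
            tonemapping.mode.overrideState = true;
            tonemapping.mode.value = TonemappingMode.ACES;
        }

        // Compensate for warm artificial lighting; tune the value by eye.
        if (volume.profile.TryGet(out WhiteBalance whiteBalance))
        {
            whiteBalance.temperature.overrideState = true;
            whiteBalance.temperature.value = -20f; // illustrative offset on the -100..100 slider
        }
    }
}
```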

Environment lighting

Due to the dynamic nature of the demo, with the ability to change the time of day as well as the position of the car, a single overcast bake (Indirect Only) was set up, using an overcast HDRI sky with a multiplier of 10 000 lux. This provided a sufficient amount of occlusion in the concave parts of the Amalienborg environment. 

Thereafter, I set up an Indirect Lighting Controller to easily modulate the intensity of the baked indirect lighting, especially for low-light conditions. This allowed me to create a fully physically-based lighting setup, with the Directional Light intensity ranging from under 1 lux (moonlight) to 100 000 lux (intense sunlight).
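Here is a minimal sketch of driving the sun with physical units from a script, assuming the HDRP 7.x-era SetIntensity API; the exact lux values are illustrative presets.

```csharp
using UnityEngine;
using UnityEngine.Rendering.HighDefinition;

public class SunIntensity : MonoBehaviour
{
    [SerializeField] HDAdditionalLightData sun; // the Directional Light

    // Illustrative presets expressed in physical units (lux).
    public void SetMoonlight() { sun.SetIntensity(0.5f, LightUnit.Lux); }
    public void SetOvercast()  { sun.SetIntensity(10000f, LightUnit.Lux); }
    public void SetFullSun()   { sun.SetIntensity(100000f, LightUnit.Lux); }
}
```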

Finally, Volumetric Fog is added to provide a better sense of depth: every light in the environment can interact with the fog and generate gorgeous volumetric effects, such as crepuscular rays. The street lamps rely on the cookie baking technique I detailed in my previous Unity Expert Guide, to ensure flawless self-shadowing for emissive sources.

Thanks to the versatility of Unity’s GPU Progressive Lightmapper (PLM) and the recent addition of machine-learning denoisers from NVIDIA and Intel, the entire courtyard can be baked in under 30 seconds on a GeForce RTX 2080 Ti. The following settings are sufficient to create a soft sky-occlusion bake for the entire courtyard:

  • Resolution of 5 texels per meter
  • 2 bounces
  • No Ambient Occlusion
  • NVIDIA OptiX AI-Accelerated Denoiser

Ambient occlusion

Another important effect is Ambient Occlusion (AO), either using a screen-space or a ray-tracing technique: it simulates the micro-occlusion occurring in concave parts of the scene. If you did not bake the AO offline with an external solution, these two real-time techniques can greatly refine the ambient lighting quality, especially in the interior of the car, and they can be particularly effective at reducing reflection leaking. Be aware that using unnecessarily strong Intensities can produce a cartoony look, due to the increased local contrast and haloing effect.

If you rely on real-time screen-space Ambient Occlusion, handling the darkening under the car still requires a dedicated shadow plane to ensure optimal results, because screen-space AO cannot produce satisfactory results at close range when parts of the undercarriage are off-screen. The occlusion can be baked in a DCC package, and the resulting texture can then be applied in Unity using two different methods:

  1. A plane using an HDRP Unlit shader 
  2. A Decal Projector using an HDRP Decal shader

The obvious advantage of a decal is that it will be correctly projected onto uneven ground, even when using advanced techniques such as displacement mapping, which offsets pixels downwards; an unlit plane, on the other hand, might appear to float when pixel-peeping. Setting up a Reflection Probe underneath the car will also prevent unwanted reflections from leaking onto the road surface and the undercarriage.

Camera setup

HDRP’s Physical Camera controls simulate many key characteristics of a lens with the following parameters, among others:

  • Focal Length
  • Aperture
  • Sensitivity (ISO)
  • Shutter Speed
  • Blade Count
  • Anamorphism
  • Barrel Clipping

These properties can affect both the Depth of Field (DoF) and the Exposure if you so choose. One of the advantages of HDRP is that you can easily decouple the Exposure from the Physical Camera settings, giving you more freedom with lighting and Exposure while still maintaining a believable DoF.
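As a hedged sketch, assuming HDRP 7.x-era field names, a physically-based camera can be configured from a script as follows; the 85 mm f/2.8 values are illustrative, not the settings used in the demo.

```csharp
using UnityEngine;
using UnityEngine.Rendering.HighDefinition;

[RequireComponent(typeof(Camera), typeof(HDAdditionalCameraData))]
public class BeautyCamera : MonoBehaviour
{
    void Start()
    {
        var cam = GetComponent<Camera>();
        cam.usePhysicalProperties = true;
        cam.focalLength = 85f; // mm, a classic portrait/automotive focal length

        var hdCam = GetComponent<HDAdditionalCameraData>();
        hdCam.physicalParameters.aperture = 2.8f;          // f-stop, drives DoF
        hdCam.physicalParameters.iso = 200;                // sensitivity
        hdCam.physicalParameters.shutterSpeed = 1f / 125f; // seconds
        hdCam.physicalParameters.bladeCount = 7;           // bokeh shape
    }
}
```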

HDRP also provides a Manual Focus mode for the DoF, using non-physically-based Near and Far Blur distances. Nevertheless, I highly recommend using the Physical Camera mode instead and mimicking real-world automotive photography. When in doubt, sticking to traditional cinematographic conventions and shot composition will avoid raising eyebrows. For instance, the use of an ultra-wide lens should be restricted to very specific scenarios, such as action shots, because it tends to introduce unwanted perspective distortion when shooting a subject at close range. Likewise, an unnecessarily shallow Depth of Field can hurt the sense of scale and look unrealistic, or video-gamey. Additionally, you can link the Camera’s Field of View to its Focal Length to prevent unrealistic DoF scenarios caused by pairing a very wide field of view with a very long focal length, since those two parameters are inversely related.
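The underlying relationship is simple trigonometry: fov = 2 · atan(sensorSize / (2 · focalLength)). Unity exposes a helper for this conversion, as in this minimal sketch; the component name and default values are illustrative (24 mm is the vertical size of a full-frame sensor).

```csharp
using UnityEngine;

[RequireComponent(typeof(Camera))]
public class FovFromFocalLength : MonoBehaviour
{
    [SerializeField] float focalLength = 50f;  // mm
    [SerializeField] float sensorHeight = 24f; // mm, full-frame vertical sensor size

    Camera cam;

    void Start()
    {
        cam = GetComponent<Camera>();
    }

    void Update()
    {
        // Keep the vertical field of view consistent with the chosen focal length.
        cam.fieldOfView = Camera.FocalLengthToFieldOfView(focalLength, sensorHeight);
    }
}
```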

There are many more camera effects available in HDRP: the new physically-based Bloom and the Panini Projection deserve an honorable mention. As always, all these effects require subtlety and artistic taste, especially when simulating generally unwanted lens defects, such as Lens Distortion, Chromatic Aberration, and Vignette.

There is no doubt these camera features can take time to digest if you are not familiar with the Art of Photography. Nonetheless, learning about the technical aspects of cameras and cinematography will enable you to create better visuals and more interesting compositions. Even if you don’t have an artistic mindset, you will be able at least to replicate the look of existing marketing materials more easily.

Transparency

One of the areas where real-time 3D struggles particularly is the correct rendition of transparencies, because such materials are usually treated in a different shading pass and rarely carry their own depth information. This means sorting transparent surfaces can be tricky, and transparent pixels typically inherit the depth of opaque objects located behind them. This obviously becomes troublesome when combined with Depth of Field, as the transparent objects that should normally be in focus become as blurry as the defocused background. 

Thankfully, HDRP has a few tricks up its sleeve: in the inspector of a material (for the HDRP Lit shader), you can toggle several Surface Options, such as “Render Back then Front” to prevent common sorting artifacts, and “Transparent Depth Postpass” to avoid graphical bugs related to post-processing effects. Unity even allows you to manually sort objects!

The following worst-case scenario features a shallow depth of field with several layers of glass in the taillights and headlights. Without Transparent Depth Postpass, parts of these lamps inherit the depth of the building and pavement, and therefore, they receive a similar level of blurriness as these background objects. As the blur is very strong, parts of the glass materials nearly disappear into the background. Thankfully, the Transparent Depth Postpass or the more radical Write Depth can solve this issue.

Nevertheless, one nontrivial rendering problem persists: the building and the pavement, seen through the taillight and headlight respectively, will remain in perfect focus, although they clearly belong to the background and thus should be blurred as such. Sadly, there is no elegant solution to this problem in the realm of game engines, and this issue isn’t limited to Unity only: this is one of the limitations of any rendering pipeline that relies on a single depth buffer.

Final words

HDRP is a very flexible rendering pipeline, able to scale from mainstream consoles to very high-end workstations. However, this flexibility comes at the price of a daunting number of rendering features to toggle and parameters to tune, in order to satisfy your project’s requirements. Hopefully, after reading my expert guide, you should have a better understanding of the capabilities of HDRP for your high-end visualizations, and the setup required to dramatically raise their visual quality for Automotive, AEC, Film, and even next-generation games.

Finally, thanks to the ongoing improvements to HDRP’s Ray-Tracing feature set, the gap between real-time and offline rendering is progressively narrowing: many traditional screen-space and image-based techniques might soon be a thing of the past, at least for high-end real-time applications. Nevertheless, there is no doubt these “antique” techniques, when pushed to their limits, can still easily fool the main target of these visualizations, i.e. the average consumer.
