
The Scriptable Render Pipeline (SRP), introduced in the 2018.1 beta, is a way of configuring and performing rendering in Unity that is controlled from a C# script. Before writing a custom render pipeline, it’s important to understand what exactly we mean by “render pipeline”.

What is a Render Pipeline?

“Render Pipeline” is an umbrella term for a number of techniques used to get objects onto the screen. It encompasses, at a very high level:

  • Culling
  • Rendering Objects
  • Post processing

In addition to these high-level concepts, each responsibility can be broken down further depending on how you want to execute it. For example, rendering objects could be performed using:

  • Multi-pass rendering
    • one pass per object per light
  • Single-pass
    • one pass per object
  • Deferred
    • Render surface properties to a g-buffer, perform screen space lighting

When writing a custom SRP, these are the kind of decisions that you need to make. Each technique has a number of tradeoffs that you should consider.

Demo Project

All the features discussed in this post are covered in a demo project located on GitHub: https://github.com/stramit/SRPBlog

The Rendering entry point

When using SRP, you need to define a class that controls rendering; this is the render pipeline you will be creating. The entry point is a call to “Render”, which takes the render context (described below) and a list of cameras to render.
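As a rough sketch, assuming the 2018.1 experimental API (UnityEngine.Experimental.Rendering) and a placeholder class name, the entry point looks something like this:

    using UnityEngine;
    using UnityEngine.Experimental.Rendering;

    public class MyPipeline : RenderPipeline
    {
        // Unity calls this with the context to record commands into and
        // the list of cameras that need rendering this frame.
        public override void Render(ScriptableRenderContext context, Camera[] cameras)
        {
            base.Render(context, cameras);
            // Build up rendering commands here, then submit them.
        }
    }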

The Render Pipeline Context

SRP renders using the concept of delayed execution. As a user, you build up a list of commands and then execute them. The object that you use to build up these commands is called the ‘ScriptableRenderContext’. Once you have populated the context with operations, you can call ‘Submit’ to execute all of the queued-up commands.

An example of this is clearing a render target using a command buffer that is executed by the render context:
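A minimal sketch of that idea (again assuming the 2018.1 experimental API), as it would appear inside the pipeline’s Render method:

    // Queue a clear of depth and color via a command buffer, have the
    // context execute it, then submit the queued work to the GPU.
    var cmd = new CommandBuffer();
    cmd.ClearRenderTarget(true, true, Color.black);
    context.ExecuteCommandBuffer(cmd);
    cmd.Release();

    context.Submit();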

A not very exciting example of a render pipeline :)

Here’s a complete render pipeline that simply clears the screen.
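The sketch below is a reconstruction assuming the 2018.1 experimental API; the class names are placeholders, and the canonical version lives in the demo project linked above:

    using UnityEngine;
    using UnityEngine.Rendering;
    using UnityEngine.Experimental.Rendering;

    // The asset assigned in the Graphics settings; Unity asks it to
    // create the pipeline instance it will render with.
    [ExecuteInEditMode]
    public class ClearScreenPipelineAsset : RenderPipelineAsset
    {
        protected override IRenderPipeline InternalCreatePipeline()
        {
            return new ClearScreenPipeline();
        }
    }

    public class ClearScreenPipeline : RenderPipeline
    {
        public override void Render(ScriptableRenderContext context, Camera[] cameras)
        {
            base.Render(context, cameras);

            // Queue a clear to a constant color, execute it, and submit.
            var cmd = new CommandBuffer();
            cmd.ClearRenderTarget(true, true, Color.green);
            context.ExecuteCommandBuffer(cmd);
            cmd.Release();

            context.Submit();
        }
    }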

Culling

Culling is the process of figuring out what to render on the screen.

In Unity, culling encompasses:

  • Frustum culling: Calculating the objects that lie within the camera’s view frustum, between the near and far clip planes.
  • Occlusion culling: Calculating which objects are hidden behind other objects and excluding them from rendering. For more information, see the Occlusion Culling docs.

When rendering starts, the first thing that needs to be calculated is what to render. This involves taking the camera and performing a cull operation from its perspective. The cull operation returns a list of objects and lights that are valid to render for the camera. These objects are then used later in the render pipeline.

Culling in SRP

In SRP, you generally perform object rendering from the perspective of a Camera. This is the same Camera object that Unity uses for built-in rendering. SRP provides a number of APIs to begin culling with. Generally, the flow looks as follows:
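A sketch of that flow, assuming the 2018.1 experimental API, inside the pipeline’s Render method:

    foreach (var camera in cameras)
    {
        // Derive culling parameters from the camera; skip cameras that
        // cannot produce valid parameters.
        ScriptableCullingParameters cullingParams;
        if (!CullResults.GetCullingParameters(camera, out cullingParams))
            continue;

        // The parameters could be tweaked here; then the cull is
        // performed through the context.
        CullResults cullResults = CullResults.Cull(ref cullingParams, context);

        // cullResults now holds the visible renderers and lights
        // for this camera.
    }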

The cull results that get populated can now be used to perform rendering.

Drawing

Now that we have a set of cull results, we can render the visible objects to the screen.

There are many ways rendering can be configured, so a number of decisions need to be made up front. Many of these decisions will be driven by:

  • The hardware the render pipeline is targeting
  • The specific look and feel you wish to achieve
  • The type of project you are making

For example, think of a 2D mobile side-scroller versus a high-end PC first-person game. These games have vastly different constraints, so they will have vastly different render pipelines. Some concrete examples of real decisions that may be made:

  • HDR vs LDR
  • Linear vs Gamma
  • MSAA vs Post Process AA
  • PBR Materials vs Simple Materials
  • Lighting vs No Lighting
  • Lighting Technique
  • Shadowing Technique

Making these decisions up front will help you determine many of the constraints that apply when authoring your render pipeline.

For now, we’re going to demonstrate a simple unlit renderer that draws opaque objects.

Filtering: Render Buckets and Layers

Generally, when rendering, objects have a specific classification: they are opaque, transparent, sub-surface, or any number of other categories. Unity uses a concept of queues for representing when an object should be rendered; these queues form buckets that objects are placed into (sourced from the material on the object). When rendering is invoked from SRP, you specify which range of buckets to use.

In addition to buckets, standard Unity layers can also be used for filtering. This provides the ability to apply additional filtering when drawing objects via SRP.
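For instance, a sketch that filters to the opaque buckets across all layers, assuming the 2018.1 experimental API:

    // Accept only objects in the opaque queue range, on any layer.
    var filterSettings = new FilterRenderersSettings(true)
    {
        renderQueueRange = RenderQueueRange.opaque,
        layerMask = ~0
    };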

Draw Settings: How things should be drawn

Filtering and culling determine what should be rendered, but we still need to determine how it should be rendered. SRP provides a variety of options to configure how objects that pass filtering are rendered. The structure used to configure this data is the ‘DrawRendererSettings’ structure. This structure allows a number of things to be configured:

  • Sorting – The order in which objects should be rendered; examples include back-to-front and front-to-back.
  • Per-renderer flags – Which ‘built-in’ settings should be passed from Unity to the shader; this includes per-object light probes, per-object lightmaps, and similar.
  • Rendering flags – Which algorithm should be used for batching: instanced vs. non-instanced.
  • Shader pass – Which shader pass should be used for the current draw call.
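A sketch of such a configuration, assuming the 2018.1 experimental API; the pass name “BasicPass” is a placeholder that would need to match a “LightMode” tag in your shaders:

    // Configure how the filtered objects will be drawn.
    var drawSettings = new DrawRendererSettings(camera, new ShaderPassName("BasicPass"));

    // Sort as typical opaque geometry (roughly front to back).
    drawSettings.sorting.flags = SortFlags.CommonOpaque;

    // Pass per-object light probe and lightmap data to the shaders.
    drawSettings.rendererConfiguration =
        RendererConfiguration.PerObjectLightProbe | RendererConfiguration.PerObjectLightmaps;

    // Batch via instancing where possible.
    drawSettings.flags = DrawRendererFlags.EnableInstancing;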

Drawing

Now we have the three things we need to issue a draw call:

  • Cull results
  • Filtering rules
  • Drawing rules

We can issue a draw call! Like all things in SRP, a draw call is issued as a call into the context. In SRP you normally don’t render individual meshes; instead, you issue a single call that renders a large number of them in one go. This reduces script execution overhead as well as allowing fast, jobified execution on the CPU.

To issue a draw call, we combine the pieces that we have been building up:
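Continuing the same hedged sketch, with the cull results, draw settings, and filter settings from above:

    // Draw every visible renderer that passes the filter, then submit
    // the queued work for execution.
    context.DrawRenderers(cullResults.visibleRenderers, ref drawSettings, filterSettings);
    context.Submit();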

This will draw the objects into the currently bound render target. You can use a command buffer to switch the render target if you so wish.

A renderer that renders opaque objects can be found here:

https://github.com/stramit/SRPBlog/blob/master/SRP-Demo/Assets/SRP-Demo/2-OpaqueAssetPipe/OpaqueAssetPipe.cs

This example can be further extended to add transparent rendering:

https://github.com/stramit/SRPBlog/blob/master/SRP-Demo/Assets/SRP-Demo/3-TransparentAssetPipe/TransparentAssetPipe.cs

The important thing to note here is that when rendering transparent objects, the rendering order is changed to back-to-front.
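In terms of the sketches above, that change amounts to a different sort flag and queue range:

    // Transparent pass: sort back to front and use the transparent buckets.
    drawSettings.sorting.flags = SortFlags.CommonTransparent;
    filterSettings.renderQueueRange = RenderQueueRange.transparent;
    context.DrawRenderers(cullResults.visibleRenderers, ref drawSettings, filterSettings);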

We hope that this post will help you get started writing your own custom SRP. Download the 2018.1 beta to get started and let us know what you think on this forum thread!

Comments are closed.

  1. How can we get access to the demos shown in the video above?

  2. Is it possible somehow to get the data from a RT in the CPU or copy it directly from VRAM to RAM?
    I need to access the content in the CPU per frame.

  3. Great job. A good render, I learned a lot of useful things.

  4. Objects with the Standard Shader are not rendering in the Opaque and Transparent Asset Pipes; do I need a special material or other setup to make them render?

  5. Looks really good so far, now we just need some intermediate pipelines to learn from:)
    While the Basic is very easy to understand and get along with, the HD seems like a complete framework of its own and too much to grasp without very thorough study…
    Are there any intermediate pipelines out there or used internally, like Deferred only with basic feature support (shadows, lightmapping, …), to learn from?

  6. Thanks for the much needed introduction!

    I use these extensively:

    Camera.AddCommandBuffer( CameraEvent evt, CommandBuffer buffer )
    Light.AddCommandBuffer( LightEvent evt, CommandBuffer buffer )

    How do they work in SRP? How are the buffers picked up and how are CameraEvent and LightEvent handled?

  7. I suspect that Unity actually has two goals here:

    (1) Make a scriptable render pipeline, which brings great opportunities, and is pretty much required for AAA development studios to consider the platform. Oh, and it’s technically awesome.
    (2) Make two new “standard” render pipelines targeting core markets of mass-market mobile and PC/Console gamedev.

    The fact that the business goal of (2) can be implemented in terms of the technical goal (1) is a nice bonus.

    The existing render pipelines are not feature competitive with modern AAA titles, and they’re too fill rate intensive to use for a mass-market mobile title. So Unity has a market need for this change to make render pipelines for those targets because that’s what their customers want. Making the pipeline scriptable at the same time is a bonus, but opens up a world of divergence that could make code-based assetstore assets even more impossible to rely on than they already are.

    The end results of this change will see a massively improved Unity and are undoubtedly the right thing to do. But I’d caution that this is a hard change to make (it took EA and the Frostbite team a long time and they had the advantages of full source code and everyone being part of the same company…) I am greatly concerned about whether the wider Unity (including their customers with existing games, and the asset store vendors) will be able to support each other through this transition.

    I struggle to look at the recent history of “open” Unity systems (Navmesh Components, UNET HLAPI, and Unity.UI’s definition of “open source” is interesting…) and believe that that model will transition well to supporting an “open” renderer. But of course, maybe that is not a required goal. Maybe the goal is really the two new LW and HD pipelines, plus the self-documenting benefits of having the C# source act as the documentation, and that alone does seem a great step forward.

    A Unity world where render algorithms are highly customizable but where artist workflow X only works with pipeline Y assuming you’re using asset store customizations Z but not W (and assets built assuming the previous assumptions) may not drive higher quality visuals overall.

    1. I am afraid that until people “play” “games” on their phones, this is the only way to go forward. Your scenario is likely, but my point is that it is unavoidable.

    2. As an asset publisher that has been involved in the ongoing talks in the forums about this very topic…there are some reasons to be worried indeed.

      SRPs are not cross-compatible, and asset vendors will need to give explicit support to each one of them (both the standard Unity ones and any popular custom ones) if we want users to be able to use our assets.
      We have been told this won’t be mandatory, which means that not every developer will port their assets to every SRP, and thus some assets won’t be available in some SRPs.

      Furthermore, if you write your very own SRP you will be left without any compatible assets unless you take the extra care to write all the handlers, importers and compatibility fixes by yourself or find a very good asset vendor willing to spend his/her time helping you port their assets to your own SRP.

      SRPs are a very good idea indeed, but it has been a worry so far (due to this feature being in a VERY early state, with almost no information available to asset creators) that they will end up splitting most of the Asset Store community, both customers and vendors. Our main worry is the compatibility and support nightmare it will unleash on asset creators, giving them at least three times more work to support their assets, as well as the added difficulties for new users, who may end up with little to no assets compatible with the SRP of their choice, or who may demand that asset providers support a never-ending stream of new custom SRPs without understanding the technical limitations behind each and every port.

      But it is indeed an interesting topic and very probably a needed step, just a very risky one. We’ll need to follow it to see how it goes.

  8. Very nice!

  9. Just out of curiosity: Does SRP allow sharing the culling result of one camera with others? Think of mirrors for a vehicle. In a car you typically have 3 mirrors looking in the same direction. All three render from a slightly different position, and different parts of the rendered result are obstructed by the car geometry itself, but in fact all three cameras do the same culling operations. If it were possible to use the culling result of one camera for the other two as well, it could save some performance in vehicle simulations imho.
    Despite this maybe stupid question, this stuff sounds exciting :-)

    1. Seems you have two options: either use the culling result from the main camera for all three, or build up your own matrices and populate the ScriptableCullingParameters manually :) You’d need some modifications to the SRP though, identifying the three cameras you want to share culling with and handling that in the camera render loop.

      1. Thanks for the clarification :-)

  10. As a long time mobile dev, with minimal experience in how graphics engines actually work, this was an extremely helpful article.

    I kept reading about the “Scriptable Rendering Pipeline” and thought: “That sounds great! …I think? Too bad it probably won’t be useful to us.”

    As it turns out, I think we have an actual use for it in our current project.

  11. Great article!

  12. Well done unity Team

  13. This image could be rendered in Unity 2.6; use the translucent mushroom that this French guy showed on Twitter.

  14. Excellent info. While I love unity managing all this for me in most cases, I can think of a few projects where I would love to customize a SRP to make novel visual styles. I look forward to playing with all of this soon!

  15. Thanks for the information, Tim. I appreciate these docs (hoping these blogs will make their way to the actual docs too).

    Also, can people not be rude and splat their wishlists on every blog posting. You guys realise that will never work, right?

  16. Dead excited for this. Well done unity team!

    You usually find me on the forums complaining about stuff that the Unity team has promised for a while but not delivered on (with good reason, there are a few things missing for a while that hold me and many off from subscribing!).

    So I am very, very happy to see that for once you are prioritizing the engine’s fundamentals in a release, instead of adding lots of extra but crappy services and unfinished systems.

    BUT PLEASE:

    FINISH NESTED PREFABS

    UPDATE TERRAIN SYSTEM

    ADD SMARTSPRITE

    FIX AND DOCUMENT PROPERLY TIMELINE

    These things have really been waited on for too long now; it’s sort of a necessity for an engine with over 50% of all developers in the world using it to have an up-to-date terrain system and a spline-based 2D terrain system.

    Keep up the good work!

    1. I am with you, man. I am really mad about core engine features being outdated and literally crappy. SRP looks like a real step forward, and I hope they will fix other systems in 2018 as well. Because Unity should be on a high level not just partially.

    2. I will join you too :) Pls, don’t forget about 2d animation (like builtIn anima2d).

      1. But Anima2D is a part of Unity now, like Cinemachine or TextMeshPro…

    3. PLEASE STOP