In Unity 5 we’ve been adding many user-visible graphics features (new Standard shader, realtime global illumination, reflection probes, new lightmapping workflow and so on), but we’ve also worked on rendering internals. Besides typical things like “optimizing it” (e.g. multithreaded light culling) and “making it more consistent” (e.g. more consistency between Linear & Gamma color spaces), we’ve also looked at how to make it more extensible.

Internally and within the beta testing group we’ve discussed various approaches. A lot of ideas were thrown around: more script callbacks, assembling small “here’s a list of things to do” buffers, the ability to create complete rendering pipelines from scratch, some sort of visual tree/graph rendering pipeline construction tools and so on. For Unity 5, we settled on the ability to create “list of things to do” buffers, which we dubbed “Command Buffers”.

A command buffer in graphics is a low-level list of commands to execute. For example, 3D rendering APIs like Direct3D or OpenGL typically end up constructing a command buffer that is then executed by the GPU. Unity’s multi-threaded renderer also constructs a command buffer between a calling thread and the “worker thread” that submits commands to the rendering API.

In our case the idea is very similar, but the “commands” are somewhat higher level. Instead of containing things like “set internal GPU register X to value Y”, the commands are “Draw this mesh with that material” and so on.

From your scripts, you can create command buffers and add rendering commands to them (“set render target, draw mesh, …”). Then these command buffers can be set to execute at various points during camera rendering.
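As a minimal sketch (class and field names here are illustrative placeholders; the API calls follow Unity 5’s `CommandBuffer`, `CameraEvent` and `Camera.AddCommandBuffer` scripting reference):

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Minimal sketch: attach a command buffer that draws one mesh
// right after the skybox. Mesh and material fields are placeholders.
[RequireComponent(typeof(Camera))]
public class AfterSkyboxDraw : MonoBehaviour
{
    public Mesh mesh;
    public Material material;
    private CommandBuffer buf;

    void OnEnable()
    {
        buf = new CommandBuffer();
        buf.name = "Draw mesh after skybox";
        // Commands are recorded once; the buffer is replayed every frame.
        buf.DrawMesh(mesh, Matrix4x4.identity, material);
        GetComponent<Camera>().AddCommandBuffer(CameraEvent.AfterSkybox, buf);
    }

    void OnDisable()
    {
        GetComponent<Camera>().RemoveCommandBuffer(CameraEvent.AfterSkybox, buf);
    }
}
```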

For example, you could render some additional objects into deferred shading G-buffer after all regular objects are done. Or render some clouds immediately after skybox is drawn, but before anything else. Or render custom lights (volume lights, negative lights etc.) into deferred shading light buffer after all regular lights are done. And so on; we think there are a lot of interesting ways to use them.

Take a look at CommandBuffer and CameraEvent pages in the scripting API documentation.

Pictures or it did not happen!

Ok, ok.

For example, we could do blurry refractions:


After opaque objects and the skybox are rendered, the current image is copied into a temporary render target, blurred and set up as a global shader property. The shader on the glass object then samples that blurred image, with UV coordinates offset based on a normal map to simulate refraction. This is similar to what the shader GrabPass does, except you can do more custom things (in this case, blurring).
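A rough sketch of that setup (the blur material and shader property names are assumptions for illustration; the `GetTemporaryRT`/`Blit`/`SetGlobalTexture` calls are from Unity 5’s `CommandBuffer` API):

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Sketch: grab the screen after opaque + skybox, blur it, and expose
// the result as a global texture for glass shaders to sample.
[RequireComponent(typeof(Camera))]
public class GrabAndBlur : MonoBehaviour
{
    public Material blurMaterial;   // hypothetical separable-blur material
    private CommandBuffer buf;

    void OnEnable()
    {
        buf = new CommandBuffer();
        buf.name = "Grab screen and blur";

        // Copy what has been rendered so far into a temporary RT.
        int screenCopyID = Shader.PropertyToID("_ScreenCopyTexture");
        buf.GetTemporaryRT(screenCopyID, -1, -1, 0, FilterMode.Bilinear);
        buf.Blit(BuiltinRenderTextureType.CurrentActive, screenCopyID);

        // Downsample/blur into a half-resolution target (-2 = half size).
        int blurredID = Shader.PropertyToID("_BlurredScreenTexture");
        buf.GetTemporaryRT(blurredID, -2, -2, 0, FilterMode.Bilinear);
        buf.Blit(screenCopyID, blurredID, blurMaterial);
        buf.ReleaseTemporaryRT(screenCopyID);

        // Glass shaders sample this global texture with offset UVs.
        buf.SetGlobalTexture("_GrabBlurTexture", blurredID);

        GetComponent<Camera>().AddCommandBuffer(CameraEvent.AfterSkybox, buf);
    }

    void OnDisable()
    {
        GetComponent<Camera>().RemoveCommandBuffer(CameraEvent.AfterSkybox, buf);
    }
}
```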

Another example use case: custom deferred lights. Here are sphere-shaped and tube-shaped lights:


After the regular deferred shading light pass is done, a sphere is drawn for each custom light, with a shader that computes the illumination and adds it to the lighting buffer.
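This could be sketched roughly as follows (class, mesh and material names are illustrative; `CameraEvent.AfterLighting` is the hook that runs once the built-in deferred light pass has finished):

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Sketch: draw one sphere per custom light into the deferred lighting
// buffer after Unity's built-in lights. Fields are placeholders.
[RequireComponent(typeof(Camera))]
public class CustomDeferredLights : MonoBehaviour
{
    public Mesh sphereMesh;          // unit sphere, scaled per light
    public Material lightMaterial;   // additive shader computing illumination
    public Transform[] lights;
    private CommandBuffer buf;

    void OnEnable()
    {
        buf = new CommandBuffer();
        buf.name = "Custom deferred lights";
        foreach (var l in lights)
            buf.DrawMesh(sphereMesh, l.localToWorldMatrix, lightMaterial);
        // AfterLighting: executes after the regular deferred light pass.
        GetComponent<Camera>().AddCommandBuffer(CameraEvent.AfterLighting, buf);
    }

    void OnDisable()
    {
        GetComponent<Camera>().RemoveCommandBuffer(CameraEvent.AfterLighting, buf);
    }
}
```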

Yet another example: deferred decals.


The idea is: after the G-buffer is done, draw each “shape” of the decal (a box) and modify the G-buffer contents. This is very similar to how lights are done in deferred shading, except instead of accumulating the lighting we modify the G-buffer textures.

Each decal is implemented as a box here, and affects any geometry inside the box volume.
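A sketch of that idea (field names and the exact render-target choice are assumptions; the decal shader is assumed to read camera depth to project the decal onto whatever geometry lies inside the box):

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Sketch: draw decal boxes into the G-buffer's albedo target right
// after the G-buffer pass, before any lights run.
[RequireComponent(typeof(Camera))]
public class DeferredDecals : MonoBehaviour
{
    public Mesh cubeMesh;            // unit cube, scaled per decal
    public Material decalMaterial;
    public Transform[] decals;
    private CommandBuffer buf;

    void OnEnable()
    {
        buf = new CommandBuffer();
        buf.name = "Deferred decals";
        // Render into GBuffer0 (albedo) while keeping the camera's depth buffer.
        buf.SetRenderTarget(BuiltinRenderTextureType.GBuffer0,
                            BuiltinRenderTextureType.CameraTarget);
        foreach (var d in decals)
            buf.DrawMesh(cubeMesh, d.localToWorldMatrix, decalMaterial);
        // BeforeLighting: the G-buffer is filled, the lights haven't run yet.
        GetComponent<Camera>().AddCommandBuffer(CameraEvent.BeforeLighting, buf);
    }

    void OnDisable()
    {
        GetComponent<Camera>().RemoveCommandBuffer(CameraEvent.BeforeLighting, buf);
    }
}
```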

Actually, here’s a small Unity (5.0 beta 22) project folder that demonstrates everything above:

You can see that all the cases above aren’t even complex to implement – the scripts are about a hundred lines of code each.

I think this is exciting. Can’t wait to see what you all do with it!

51 replies on “Extending Unity 5 rendering pipeline: Command Buffers”

Hi Aras, when I switch the project to the iOS platform, I get a “Tiled GPU perf. warning”. I can’t find any “DiscardContents” API for the CommandBuffer’s temporary RT. Any idea?
BTW, the first “blurry refractions” scene gets 12 fps on the iPhone 5. Is that normal?

Hello, I’m digging into this direction for a decal system, but I’m facing an issue:
In your current demo project, if you lower your light’s shadow strength to less than 1.0f,
let’s say 0.6 for the sun, and 0.4 for the other one,

Toggling the master script “DeferredDecalRenderer” reveals that shadows are lighter when it is enabled than when it is disabled.
Do you know why? Is there a way to maintain the shadow intensity with deferred decals?

Thanks a lot for this great opportunity to learn CommandBuffers by the way !

[…] Command buffers can be executed from a bunch of places during rendering, e.g. immediately after deferred G-buffer; or right after skybox; etc. See this blog post:  […]

I tried the Area Light demo and it is really looking nice. Much to my regret, these lights do not contribute to Enlighten GI. Is there any way to do it? Years ago I read the Enlighten paper where they say that direct lighting input is handled by the CPU. Is this still true? Is there any API support for this? I would like to have dynamic area lights that contribute to Enlighten.

Guys please, quit asking for this tiny feature or that feature thingy. What we need is a customizable and flexible rendering pipeline, and that’s what Unity is doing.
Great post Aras, keep making Unity the most flexible engine ever! :D

Hi there, we’ve read that article with great interest! CommandBuffers will certainly simplify a couple of things when rendering custom lights in a deferred fashion. The ability to pass down parameters and define custom proxy geometry is great!

However, right now we don’t see how a similar approach could also be applied to forward rendering. In Unity 5.0, is there any way to define custom parameters for specific lights? Is there something we are missing?

Thanks a lot!

Thanks Aras and team. Fantastic addition to unity’s functionality. Having just implemented a limited deferred decal system in 4.6 I can really appreciate the additional scope this will bring to 5. Brings a happy tear to my eye :)

Awesome demo! Would there be an easy way to blit the result of the decal projection onto the surface texture as an optimisation?


We threw in deferred decals. Love them. But could you give some clues on how best to optimise them when they aren’t in view? Is it worth doing? (We have an open-world game, and fill rate can potentially be an issue on console.)

Any suggestions on how we can approach it with the existing example would be super helpful, thanks!

Deferred decals? Finally! Because I’m sick of using third-party solutions and my own projector-based scripts. I’m looking forward to the list of features available for the Indie and Pro versions. It would be cool to include decals in the Free version. Thanks for your hard work, Unity team!

Yesterday I was thinking about a way to have decals in deferred rendering mode and wondered if Unity would let us modify the deferred rendering process. Now I am happy!
But I should have read this earlier to save some time brainstorming about stuff which you already implemented :)

Love the blur effect. Tried to hack it onto a new UI panel to make an iOS style blurring layer but it’s not blurring other UI objects. Anyone here wanna help me? :D

“more script callbacks”

How about you start using .NET events? Like… Everywhere. It’s 2015, and Unity still doesn’t have a “On Selection Changed” event!

Awesome work, really. I strongly believe that decal tools and area lights should be built in for both forward and deferred rendering. Thanks, Unity team.

Totally agree with you. It should really be implemented for forward rendering too. Otherwise, support MSAA in deferred.

You may have noticed that these decals are literally called “deferred decals”, so they don’t work in forward rendering, by definition. Their big benefit is that they’re really really cheap (just like lights are cheap in deferred).

If you were to implement decals for forward rendering, you would have to do it completely differently and they wouldn’t be so cheap that you could just use them like you can use deferred decals.

*Mind blown* holy crap. I heard command buffers were awesome but HOLY CRAP! I wonder if you could do temporal re-projection using this? Save the last several frames and their velocity buffers and re-project them back into a higher-res image? Obviously you would need a velocity buffer, which would be the hard part. But still, those exist. *drool*. So much potential… I wonder if these exist on shadowed lights? Hmmm… this sounds so awesome. Must go cuddle the documentation. A lot!

keep up the awesome guys!

This is awesome! I’m especially glad you included the sample project. I can already think of a ton of things I could use this for…

Very interesting, but only for Pro. Anyway, I’m happy that Unity keeps evolving its feature set.

Just curious: you mentioned that several ideas were thrown around during the discussion and all but one were rejected. Is at least one of the others (like the ability to write a custom rendering pipeline from scratch) saved for “someday”?

The deferred decals technique is exactly how we did the 2D-on-3D stuff in Sideway: New York! Of course that was in Gamebryo and we had to write our own deferred rendering pipeline to do it. But it made the decals quite cheap to render!


Great work, but I need to know a very simple important information. Has the management of dynamic meshes been optimised? Currently modifying vertex buffers and index buffers in real time is so slow (because of the unity implementation) that we have been forced to find a different solution uploading in real time textures and using shaders instead. I am still sure that, in our case, using dynamic index buffers properly would be a better solution.

Okay, continuing our conversation from Twitter about using command buffers for forward rendered area lights. Unfortunately I just realized that it ultimately wouldn’t work because it sounds like CameraEvent.AfterForwardOpaque and CameraEvent.AfterForwardAlpha both happen after all the objects finish rendering. And while I could probably make it work for the opaque passes, it would be totally broken once alpha stuff enters the picture.

Not to mention, I forgot about all that other nasty stuff needed to make good lighting, like lightmapping integration, shadow options, etc. So this idea probably isn’t going anywhere. :(

Oh man, those samples are awesome :). Any plans to make all of these a native solution in a future Unity 5 release?

Most excellent. I hope when U5 is released that these example projects will remain somewhere easy to find.

Does this mean Unity5 can be used as a rendering backend for C# applications by bypassing its scene structure? That’s pretty huge.

This pleases me greatly. Hurray for extensibility that lets you casually add several killer, in-demand features just as examples. Leverage indeed!
