
Extending Unity 5 rendering pipeline: Command buffers

February 6, 2015 in Engine & platform | 3 min. read

In Unity 5 we've been adding many user-visible graphics features (new Standard shader, realtime global illumination, reflection probes, new lightmapping workflow and so on), but we've also worked on rendering internals. Besides typical things like "optimizing it" (e.g. multithreaded light culling) and "making it more consistent" (e.g. more consistent behavior between Linear & Gamma color spaces), we've also looked at how to make it more extensible.

Internally and within the beta testing group we've discussed various approaches. A lot of ideas were thrown around: more script callbacks, assembling small "here's a list of things to do" buffers, the ability to create complete rendering pipelines from scratch, some sort of visual tree/graph rendering pipeline construction tools and so on. For Unity 5, we settled on the ability to create "list of things to do" buffers, which we dubbed "Command Buffers".

A command buffer in graphics is a low-level list of commands to execute. For example, 3D rendering APIs like Direct3D or OpenGL typically end up constructing a command buffer that is then executed by the GPU. Unity's multi-threaded renderer also constructs a command buffer between a calling thread and the "worker thread" that submits commands to the rendering API.

In our case the idea is very similar, but the "commands" are somewhat higher level. Instead of containing things like "set internal GPU register X to value Y", the commands are "Draw this mesh with that material" and so on.

From your scripts, you can create command buffers and add rendering commands to them (“set render target, draw mesh, …”). Then these command buffers can be set to execute at various points during camera rendering.
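
To make that concrete, here's a rough sketch of what such a script looks like. The mesh, the material and the choice of the AfterSkybox event are just placeholders for illustration:

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Minimal sketch: draw one extra mesh right after the skybox is rendered.
// Attach to a camera; mesh and material are assigned in the Inspector.
public class ExtraDraw : MonoBehaviour
{
    public Mesh mesh;
    public Material material;
    private CommandBuffer buf;

    void OnEnable()
    {
        buf = new CommandBuffer();
        buf.name = "Draw extra mesh";
        // Record commands once; they execute every time the camera
        // reaches this point in its rendering.
        buf.DrawMesh(mesh, Matrix4x4.identity, material);
        GetComponent<Camera>().AddCommandBuffer(CameraEvent.AfterSkybox, buf);
    }

    void OnDisable()
    {
        GetComponent<Camera>().RemoveCommandBuffer(CameraEvent.AfterSkybox, buf);
    }
}
```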

For example, you could render some additional objects into the deferred shading G-buffer after all regular objects are done. Or render some clouds immediately after the skybox is drawn, but before anything else. Or render custom lights (volume lights, negative lights etc.) into the deferred shading light buffer after all regular lights are done. And so on; we think there are a lot of interesting ways to use them.

Take a look at the CommandBuffer and CameraEvent pages in the scripting API documentation.

Pictures or it did not happen!

Ok, ok.

For example, we could do blurry refractions:

[Image: blurry refraction on a glass object, rendered with a command buffer]

After the opaque objects and skybox are rendered, the current image is copied into a temporary render target, blurred, and set up as a global shader property. The shader on the glass object then samples that blurred image, with UV coordinates offset based on a normal map to simulate refraction. This is similar to what the shader GrabPass does, except you can do more custom things (in this case, blurring).
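
In command buffer terms, that grab-and-blur sequence looks roughly like the sketch below. The texture property names and the blur material are assumptions for illustration; the actual project does a proper separable blur over several passes:

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Sketch: grab the screen after the skybox, blur it, expose it globally.
public class BlurryRefractionGrab : MonoBehaviour
{
    public Material blurMaterial; // assumed blur shader, assigned in Inspector
    private CommandBuffer buf;

    void OnEnable()
    {
        buf = new CommandBuffer();
        buf.name = "Grab screen and blur";

        // Copy the current image into a temporary render target
        // (-1,-1 means "camera pixel size").
        int screenCopyID = Shader.PropertyToID("_ScreenCopyTexture");
        buf.GetTemporaryRT(screenCopyID, -1, -1, 0, FilterMode.Bilinear);
        buf.Blit(BuiltinRenderTextureType.CurrentActive, screenCopyID);

        // Blur it; a real implementation would do separable horizontal
        // and vertical passes, possibly at reduced resolution.
        int blurredID = Shader.PropertyToID("_BlurredScreenTexture");
        buf.GetTemporaryRT(blurredID, -1, -1, 0, FilterMode.Bilinear);
        buf.Blit(screenCopyID, blurredID, blurMaterial);
        buf.ReleaseTemporaryRT(screenCopyID);

        // Set it up as a global shader property so the glass shader
        // can sample it.
        buf.SetGlobalTexture("_GrabBlurTexture", blurredID);

        GetComponent<Camera>().AddCommandBuffer(CameraEvent.AfterSkybox, buf);
    }

    void OnDisable()
    {
        GetComponent<Camera>().RemoveCommandBuffer(CameraEvent.AfterSkybox, buf);
    }
}
```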

Another example use case: custom deferred lights. Here are sphere-shaped and tube-shaped lights:

[Image: custom sphere-shaped and tube-shaped deferred lights]

After the regular deferred shading light pass is done, a sphere is drawn for each custom light, with a shader that computes illumination and adds it to the lighting buffer.
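
A rough sketch of how one such light could be hooked up is below. The sphere mesh and the light material (a shader that reads the G-buffer, computes the light's contribution and additively blends it) are placeholder assumptions:

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Sketch: add one custom light into the deferred lighting buffer,
// after Unity's own deferred lights have been rendered.
public class CustomDeferredLight : MonoBehaviour
{
    public Mesh sphereMesh;       // volume covering the light's influence
    public Material lightMaterial; // shader doing the lighting math
    private CommandBuffer buf;

    void OnEnable()
    {
        buf = new CommandBuffer();
        buf.name = "Custom deferred light";
        // Draw the light's volume; the shader computes illumination
        // from the G-buffer and adds it to the lighting buffer.
        buf.DrawMesh(sphereMesh, transform.localToWorldMatrix, lightMaterial);
        // Assumes the main camera is the one doing the rendering.
        Camera.main.AddCommandBuffer(CameraEvent.AfterLighting, buf);
    }

    void OnDisable()
    {
        Camera.main.RemoveCommandBuffer(CameraEvent.AfterLighting, buf);
    }
}
```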

Yet another example: deferred decals.

[Image: deferred decals projected onto scene geometry]

The idea is: after the G-buffer is done, draw each "shape" of the decal (a box) and modify the G-buffer contents. This is very similar to how lights are done in deferred shading, except that instead of accumulating the lighting, we modify the G-buffer textures.

Each decal is implemented as a box here, and affects any geometry inside the box volume.
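
Here's a rough sketch of how a single decal could be set up. The cube mesh and the decal material (a shader that reconstructs the surface position inside the box from scene depth and writes new albedo) are placeholder assumptions:

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Sketch: draw one decal box into the G-buffer before lighting runs.
public class DeferredDecal : MonoBehaviour
{
    public Mesh cubeMesh;         // unit cube scaled by this transform
    public Material decalMaterial; // shader that rewrites G-buffer albedo
    private CommandBuffer buf;

    void OnEnable()
    {
        buf = new CommandBuffer();
        buf.name = "Deferred decal";
        // Render into the diffuse G-buffer target, keeping the camera's
        // existing depth buffer for depth testing.
        buf.SetRenderTarget(BuiltinRenderTextureType.GBuffer0,
                            BuiltinRenderTextureType.CameraTarget);
        buf.DrawMesh(cubeMesh, transform.localToWorldMatrix, decalMaterial);
        // Assumes the main camera is the one doing the rendering.
        Camera.main.AddCommandBuffer(CameraEvent.BeforeLighting, buf);
    }

    void OnDisable()
    {
        Camera.main.RemoveCommandBuffer(CameraEvent.BeforeLighting, buf);
    }
}
```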

Actually, here's a small Unity (5.0 beta 22) project folder that demonstrates everything above: RenderingCommandBuffers50b22.zip.

You can see that all the cases above aren't even complex to implement - the scripts are about a hundred lines of code each.

I think this is exciting. Can't wait to see what you all do with it!
