
After a bit of radio silence we’re back with another Unity 3 feature preview.

In this short video Will Goldstone shows some of the benefits of the new deferred rendering path in Unity 3.0, such as being able to have dozens of lights on screen at the same time without a performance hit, as well as using advanced image effects that utilize the same depth and normal textures as deferred rendering.

[vimeo clip_id="14832454" width="640" height="360"]


And stay tuned for the Occlusion Culling video… we promise it’s almost ready ;)

20 replies on “Unity 3 Feature Preview – Deferred Rendering”


Are all these objects covered with emissive materials, or are light sources added to each of them?

Thank you!

Best regards,

WHAT! Pro only? That is just damn great :/. I was really looking forward to Unity 3 as well. Oh well, I guess that’s where you make the money.

@Dave: if deferred rendering is not supported, it will fall back to forward shader-based rendering. If shaders are not supported, it will fall back to the fixed-function per-vertex lit rendering path.
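The fallback chain described above can be sketched as a simple selection function (illustrative Python, not the actual Unity API; the capability flags and path names are assumptions):

```python
def choose_rendering_path(supports_deferred, supports_shaders):
    """Pick the best available rendering path, mirroring the
    deferred -> forward -> vertex-lit fallback chain described
    in the comment above. Names here are illustrative only."""
    if supports_deferred:
        return "deferred"
    if supports_shaders:
        return "forward"
    # oldest hardware: fixed-function per-vertex lighting
    return "vertex-lit"
```

So a card without Shader Model 3.0 support would land on "forward", and a card without programmable shaders at all would land on "vertex-lit".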

@edvinas: yeah, anti-aliasing and deferred rendering techniques do not easily mix together. Like Ethan said, we are providing an edge-blur postprocessing effect to somewhat help with this. It’s not proper anti-aliasing, but helps to hide jagged edges.

1) yes, shadows still cost a lot, primarily because shadow casters need to be rendered. So yeah, the cost of rendering into the shadow map is very similar to forward rendering. The cost of applying the shadow map (rendering the receivers) is much cheaper in deferred rendering though; it happens in screen space just like for non-shadowed lights.

2) It’s “deferred lighting” aka “light pre-pass”. Usually there’s one color target (world space normals in RGB, specular exponent in A), and depth is just fetched from the native depth buffer. On platforms/GPUs that don’t support reading the depth buffer as a texture, depth is contained in another render target. So yeah, the G-buffer is really small, but does not have any fancy data like motion vectors.
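The small G-buffer layout described above (normals in RGB, specular exponent in A) can be modeled as a simple pack/unpack round-trip. This is a toy Python sketch; the 8-bit encoding and exponent range are assumptions for illustration, not Unity’s actual format:

```python
def pack_gbuffer_texel(normal, spec_exponent, max_exponent=128.0):
    """Pack a world-space unit normal (RGB) and a specular exponent (A)
    into one 8-bit RGBA texel -- the compact 'light pre-pass' layout
    described above. Encoding details are illustrative."""
    nx, ny, nz = normal
    # remap each normal component from [-1, 1] to [0, 255]
    r = round((nx * 0.5 + 0.5) * 255)
    g = round((ny * 0.5 + 0.5) * 255)
    b = round((nz * 0.5 + 0.5) * 255)
    # store the exponent normalized against an assumed maximum
    a = round(min(spec_exponent / max_exponent, 1.0) * 255)
    return (r, g, b, a)

def unpack_gbuffer_texel(texel, max_exponent=128.0):
    """Reverse the packing: recover an approximate normal and exponent."""
    r, g, b, a = texel
    normal = tuple(c / 255.0 * 2.0 - 1.0 for c in (r, g, b))
    return normal, a / 255.0 * max_exponent
```

The round-trip is lossy (8 bits per channel), which is part of why this layout is cheap: one small render target instead of a fat multi-target G-buffer.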

3) It depends. Post-processing based edge blurring is not “real” anti-aliasing (it does not have access to sub-pixel data), i.e. it does not look as good as real anti-aliasing. The cost of native GPU anti-aliasing (multi-sampling) is mostly larger video memory requirements, and then all triangle edges cost more, proportionally to the anti-aliasing level.
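The core of a depth-based edge blur like the one discussed above is just finding depth discontinuities and smoothing only those pixels. Here is a toy 2D-list sketch of the edge-detection half (plain Python; the threshold and neighborhood are illustrative, and a real effect would also compare normals and run on the GPU):

```python
def depth_edge_mask(depth, threshold=0.1):
    """Flag pixels whose depth differs sharply from a right/down
    neighbour -- the discontinuities a depth/normal-based edge blur
    would then smooth. Toy CPU model of the idea described above."""
    h, w = len(depth), len(depth[0])
    mask = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            for dy, dx in ((0, 1), (1, 0)):
                ny, nx = y + dy, x + dx
                if ny < h and nx < w:
                    if abs(depth[y][x] - depth[ny][nx]) > threshold:
                        # both sides of the discontinuity get blurred
                        mask[y][x] = mask[ny][nx] = True
    return mask
```

Because only pixels in the mask get blurred, flat interior regions are untouched, which is why this is cheap compared to multi-sampling but can never recover true sub-pixel coverage.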

Couple of questions:

1) We are talking about lights without shadows, right? I’m guessing it’s impossible to calculate shadows only from the normals you get from the G-buffer. So lights with shadows will have more or less the same performance hit as in previous Unity versions? (not that having that many lights, even without shadows, isn’t amazing)

2) What does the gbuffer in your deferred rendering implementation contain? I’m assuming normals and depth at the very least (which is why the dof becomes so much cheaper), but how about more exotic buffers such as screen-space 2d motion vectors, which would make motion blur cheaper and also look great?

3) About the EdgeBlurEffectNormals, am I right to assume that it’s somewhat faster than traditional antialiasing? It won’t be as accurate of course but it should be easier to calculate, right?

@edvinas – there are image effects that can be used to blur the edges of meshes at certain levels of differentiation in the depth buffer. There is actually an image effect included in the “Image Effects” standard assets package that does this, called “EdgeBlurEffectNormals”.

Quick question… Will the anti-aliasing work with deferred rendering? Or will there be any other alternative to have the anti-aliasing effect with deferred rendering?
Because at the moment it looks really lame without anti-aliasing on Unity 3 beta 7, when compared to simple forward lighting and anti-aliasing on…


What happens if you set the player to deferred rendering and it’s run with a card that doesn’t support it? Does it fall back to forward rendering?

@Vectrex: dynamic batching transforms small (in vertex count) objects that share all other parameters (material, lighting set etc.) on the CPU, and draws them in one draw call. It has no relation to physics; only for rendering. Unity iPhone 1.7 had this, and with 3.0 it’s coming to all platforms.
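The dynamic batching described above (small objects sharing a material get merged into one draw call on the CPU) can be modeled with a toy grouping function. This is illustrative Python, not Unity’s implementation; the vertex limit and the object/field names are assumptions:

```python
def batch_draw_calls(objects, max_vertices=300):
    """Group small objects that share a material into single draw
    calls, leaving large meshes unbatched -- a toy model of the
    CPU-side dynamic batching described in the comment above."""
    batches = {}
    for obj in objects:
        if obj["vertices"] <= max_vertices:
            # small meshes sharing a material share one draw call
            batches.setdefault(obj["material"], []).append(obj["name"])
        else:
            # big meshes are cheaper to draw individually than to
            # re-transform on the CPU every frame
            batches[("unbatched", obj["name"])] = [obj["name"]]
    return list(batches.values())
```

The payoff is fewer draw calls, which is a CPU-side saving; as the comment notes, it has nothing to do with physics.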

@Rich: yeah, it does have some overhead. Basically, if you only use one or two lights, forward rendering might be faster. Also, deferred requires certain level of hardware support (e.g. Shader Model 3.0), and on lower GPUs it just won’t work.
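The trade-off above (forward can win with one or two lights, deferred wins with many) comes down to how pass counts scale. A very rough cost model, in illustrative Python with made-up constants rather than measured Unity costs:

```python
def pass_count_estimate(num_objects, num_lights, path):
    """Very rough pass-count model of the forward vs. deferred
    trade-off: forward re-renders geometry per light, deferred
    renders geometry once plus one screen-space pass per light.
    Constants are illustrative, not measured Unity costs."""
    if path == "forward":
        # each per-pixel light adds a geometry pass
        return num_objects * max(num_lights, 1)
    # deferred: geometry once, then cheap screen-space light passes
    return num_objects + num_lights
```

With one light, deferred’s fixed G-buffer overhead makes it slightly more expensive; past a handful of lights the forward cost explodes while deferred grows linearly, which matches the answer above.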

My question then is when/why would you ever use forward lighting, since deferred is less CPU/GPU intensive. Or does it have an initially large hit, and is only better after a certain number of lights are added?

Now I’m feeling really proud of being a pro user.

I’ve always tried to limit the number of dynamic lights to a minimum, but I think with this I’ll be able to make some interesting light moods in my games!

A cool feature indeed!

Hmm I spy ‘dynamic batching’? Instancing? Splitting/merging of sleeping physics objects? Instancing would be a big performance boost in our case (lots of identical moving objects)

Nice, this is exactly what my project needs! Will this be in both indie and pro?
Also, nice editor. I love that dark theme :). Does this mean the editor will be customizable (in a visual sense)?
Otherwise, amazing. Absolutely amazing!

@marty: Deferred rendering will not be supported for use in iOS/Android applications as the devices just aren’t robust enough for it.

@tino: No, you’ll still need to include that at the top of every script to enable it. And our core iOS implementation and support isn’t changing drastically from the current experience, and the examples and demos will behave the same. If you have more questions about all this then please redirect those to the forums, and keep comments here focused on Unity 3.0’s deferred rendering features. Thanks!

I have a question regarding Unity iPhone: is #pragma strict enabled by default in Unity 3, or do you have to enable it manually? Also, I haven’t found much information about the iPhone implementation in Unity 3. How will projects and example demos behave?

Should those of us authoring for mobile devices and thus using only one light (if that) turn on deferred lighting – or is it no big deal without multiple scene lights?
