Intermediate rendering, or what goes into a feature

December 6, 2007 in Technology

We’ve got a feature we want to do — it’s something we call “intermediate mode render calls”. The idea is that some script could say:

Draw( position, rotation, mesh, material );

And that would make the mesh appear with the given material at the given position on the screen, and it would just work with pixel lights, shadows, projectors and whatnot.


A step aside on why this is needed: in Unity 2.0 the terrain engine does not fully integrate with pixel lights or shadows, and Projectors don’t work with it either. The reason is that terrain is rendered immediately: at some point the terrain gets its render call, draws the chunks of the terrain, the trees and the detail objects, and all of this goes straight to the screen.

For shadows, pixel lights or projectors to work, however, something has to be remembered, so that the most important lights can be calculated for each object, shadow casters can be rendered, and so on.

With intermediate mode render calls the terrain engine could “submit” this lightweight rendering information: mesh, material, position; and the rendering system would figure out everything else at rendering time.

Ok, the above sounds like a plan. What goes into developing such a feature?

The basic functionality is obvious. Draw() above would not actually draw anything immediately; it would merely record the information somewhere in the scene: “this mesh with this material will have to be drawn here”. At the end of the frame, all this information would be discarded, so Draw() persists for one frame only.
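To make that concrete, here is a minimal sketch of the record-then-draw idea. Every name in it (DrawRequest, IntermediateQueue, EndOfFrame) is made up for illustration; none of this is the actual engine code:

    using System.Collections.Generic;

    // Hypothetical stand-in for what a single Draw() call records.
    public struct DrawRequest
    {
        public float[] Position;   // world position
        public float[] Rotation;   // rotation, e.g. a quaternion
        public string Mesh;        // stand-ins for the engine's mesh/material handles
        public string Material;
    }

    public static class IntermediateQueue
    {
        static readonly List<DrawRequest> requests = new List<DrawRequest>();

        // Scripts call this; nothing is rendered here, the request is only recorded.
        public static void Draw(float[] position, float[] rotation, string mesh, string material)
        {
            requests.Add(new DrawRequest { Position = position, Rotation = rotation, Mesh = mesh, Material = material });
        }

        // The rendering code walks the recorded requests when it culls, picks
        // per-object lights, renders shadow casters and finally draws.
        public static IReadOnlyList<DrawRequest> Requests { get { return requests; } }

        // Called once rendering is done: the requests live for one frame only.
        public static void EndOfFrame()
        {
            requests.Clear();
        }
    }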

Some design issues immediately pop up:

  • Will this mesh be rendered in all cameras, or just a single one? (And which one?)
  • Maybe it’s drawn in all cameras if called from inside Update(), and drawn for the current camera if called from inside OnPreCull() or OnPreRender()?
  • Or maybe we add an explicit “camera” parameter, and the mesh is rendered in all cameras if no camera is given? (One possible shape of that option is sketched after this list.)
  • What to do if Draw() is called from inside Start(), Awake() and other functions that are not related to a “frame” or a “camera”?
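For the explicit-camera option, a sketch of how it could look. All names here are placeholders, Camera included; this is not a real API:

    using System.Collections.Generic;

    public sealed class Camera { }   // placeholder type, just for the sketch

    public static class IntermediateDraw
    {
        public struct Request { public string Mesh; public string Material; public Camera TargetCamera; }

        static readonly List<Request> requests = new List<Request>();

        // No camera given: the mesh is rendered in all cameras this frame.
        public static void Draw(string mesh, string material)
        {
            Draw(mesh, material, null);
        }

        // Explicit camera: the mesh is rendered only in that camera.
        public static void Draw(string mesh, string material, Camera camera)
        {
            requests.Add(new Request { Mesh = mesh, Material = material, TargetCamera = camera });
        }

        // At render time each camera picks up its own requests plus the scene-wide ones.
        public static IEnumerable<Request> RequestsFor(Camera camera)
        {
            foreach (var r in requests)
                if (r.TargetCamera == null || r.TargetCamera == camera)
                    yield return r;
        }
    }

What a Draw() call made from Start() or Awake() should do is still open; in this sketch it would simply sit in the list until the next frame renders.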

Some of the work required is also obvious: right now all renderable things in Unity are Renderer components. That means they live in some game object, and so on. For intermediate mode calls, we need to put base rendering functionality somewhere “up”, so that a renderable thing does not have to be a Component in a game object (a rough sketch of the resulting class split follows the list below).

  • Make a leaner base renderer class that does not derive from Component (call it BaseRenderer). Move out base functionality from Renderer to BaseRenderer as needed. Make Renderer be both a Component and BaseRenderer.
  • Make the rendering code (that figures out lights, does shadows, …) operate on BaseRenderers, not Renderers.
  • Make IntermediateRenderer that would be used for intermediate mode. It has a transformation matrix and pointers to mesh and material. Make it set up and update all the related internal structures that describe the Unity scene.
    • Possibly need distinction between scene-wide meshes and camera-specific meshes.
    • Clear all intermediate renderers at the end of the frame. And make this work in the editor with multiple views into the scene present.
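A rough sketch of how that class split could look. The real code lives inside the engine; the sketch below only shows the shape of the relationships, and models Component as an interface because there is no multiple class inheritance to lean on here:

    public interface IComponent { }   // stand-in for the Component base

    public abstract class BaseRenderer
    {
        // The minimum the rendering code needs to know about a renderable thing.
        public float[] WorldMatrix;
        public string Material;

        // Lights, shadows and projectors are computed against BaseRenderer,
        // so anything deriving from it gets them without being a Component.
        public abstract void Render();
    }

    // Existing scene renderers: still components living on game objects.
    public class Renderer : BaseRenderer, IComponent
    {
        public override void Render() { /* existing per-Renderer drawing path */ }
    }

    // Created by an intermediate Draw() call and thrown away at the end of the
    // frame: a matrix, a mesh and a material, and no game object at all.
    public class IntermediateRenderer : BaseRenderer
    {
        public string Mesh;
        public override void Render() { /* draw Mesh with Material using WorldMatrix */ }
    }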

With the above, we can go and write some code to implement the feature. Look ma, no game objects here, just two lines of code drawing 100 meshes with random rotation in a loop! And it works with shadows!

Look ma, no game objects here!
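The screenshot itself is not reproduced here, so the snippet below is only a guess at what such a script could look like: the hypothetical Draw() from the top of the post called every frame from a plain Unity script, padded out so it is self-contained (mesh and material would be assigned in the inspector; rotations are re-randomized every frame in this sketch):

    using UnityEngine;

    public class HundredMeshes : MonoBehaviour
    {
        public Mesh mesh;
        public Material material;

        void Update()
        {
            // 100 meshes on a 10x10 grid, random rotation, no game objects per mesh.
            for (int i = 0; i < 100; i++)
                Draw(new Vector3(i % 10, 0, i / 10), Random.rotation, mesh, material);
        }

        // Stand-in so the sketch compiles on its own; in the post this would be
        // the engine-provided intermediate mode render call.
        static void Draw(Vector3 position, Quaternion rotation, Mesh mesh, Material material) { }
    }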

So the proof that the base feature works is there already (we say it’s a killer feature for making demoscene demos). We still have the design issues from above, plus some more that popped up while writing the code:

  • Is culling performed on these meshes or not? It’s useful if the mesh will be rendered for all cameras, and potentially redundant if the mesh is drawn for a single camera only.
  • Are layer masks indicated anywhere? Add a parameter to the function?
  • What if for each mesh rendered I want to change something in the material? (Change some value for a shader, use a different texture, a different tint color, …) Should I instantiate materials myself? Should the API instantiate materials internally? Should I supply a variable number of arguments saying “and do these changes to the material before drawing”? (One such option is sketched right after this list.)
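For that last question, one hypothetical shape (made-up names, not a real API) is letting the caller pass small per-draw property overrides, so a shared material never has to be instantiated just to change a color or a texture:

    using System.Collections.Generic;

    public struct MaterialOverride
    {
        public string Property;   // e.g. "_Color" or "_MainTex"
        public object Value;
    }

    public static class IntermediateDrawOverrides
    {
        public struct Request
        {
            public string Mesh;
            public string Material;
            public MaterialOverride[] Overrides;
        }

        static readonly List<Request> requests = new List<Request>();

        // The overrides would be applied on top of the shared material right
        // before drawing, so the material asset itself stays untouched.
        public static void Draw(string mesh, string material, params MaterialOverride[] overrides)
        {
            requests.Add(new Request { Mesh = mesh, Material = material, Overrides = overrides });
        }
    }

    // Usage: tint each instance differently without creating 100 material copies.
    //   IntermediateDrawOverrides.Draw("rock", "rockMaterial",
    //       new MaterialOverride { Property = "_Color", Value = someColor });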

It’s a long rocky road between thinking up a feature and shipping it. Surprisingly, actual development does not take up much time; the most difficult part is answering all the design questions.

Anyway, after all design issues are solved (or… ignored!), the feature usually is shipped. Yay!

Comments (5)


  1. Aras Pranckevičius

    September 17, 2008 at 8:13 am

    @TomW: exactly, and that’s the point of this post. After one decides “ok, Draw() won’t actually draw, but merely add information to some list” – what implications does that decision have, how does it interact with all the other systems, and so on.

  2. TomW

    September 16, 2008 at 11:24 pm

    Why not query for a list of things to draw, instead of Draw() generating a list of things to draw? Say, spatialSystem->GetListofVisible(&visibleEnts) to get the visible entities.

    This way you get the visibleEnts, which you can filter for reflections, shadows, z-only, post processing, other cameras, sort the list, maybe reuse the list next frame, etc.

    Immediate rendering, like you noticed with terrain, is hard because of the need to re-render, process or whatever later.
    -t

  3. Aras Pranckevičius

    April 20, 2008 at 7:32 pm

    “It seems that a loop of 1000 Draw() calls will generate only one draw call, is that correct?”
    Not really. It will act just like 1000 objects were in the scene. So it might be far fewer draw calls, if some of those objects are not visible; or it might even be more than 1000 draw calls, if the objects are affected by multiple pixel lights and/or are casting shadows, etc.

    “Something which doesn’t show up in the explorer or the inspector might be tricky to debug, no?”
    Maybe. This functionality is mostly for us, so that we can implement terrain+lighting properly, and for people who want to do something similar on their own. Yes, it is sort of an “advanced” feature.

  4. laurent

    April 18, 2008 at 8:52 pm

    This is awesome!
    It seems that a loop of 1000 Draw() calls will generate only one draw call, is that correct?
    Do you think it will draw bone-deformed meshes as well?

    Something which doesn’t show up in the explorer or the inspector might be tricky to debug, no?
    One solution would be adding an IntermediateRender component which listens to all the Draw() calls, in a similar way to how a NetworkView() listens to all the RPCs. What do you think?

  5. Dom

    December 20, 2007 at 7:44 pm

    This is interesting. Although I don’t know much about renderer implementations, I can see that this is a challenge.
    For me another aspect is interesting, and that’s reusing component functionality. In Virtools there is a lot of functionality in Components (Behaviours/BuildingBlocks). The problem with this is when you need a variation, or when you need only ‘some’ of that functionality. Usually I wind up copy/pasting the whole thing to create my desired component variation, or I copy the few functions I want to use.
    Pushing *some* functionality out of components into an API helps with re-usability, which is funny because components are mostly about modularity and re-usability. In addition to that, having too much inside an API may break lots of components when the API changes later. I guess the best is the middle of both worlds.
