Intermediate rendering, or what goes into a feature
We’ve got a feature we want to do — it’s something we call «intermediate mode render calls». The idea is that some script could say:
Draw( position, rotation, mesh, material );
And that would make the mesh appear with the given material at the given position on the screen, and it would just work with pixel lights, shadows, projectors and whatnot.
A step aside on why this is needed: in Unity 2.0 the terrain engine does not fully integrate with pixel lights or shadows, and Projectors don’t work with it either. The reason is that terrain is rendered immediately: at some point the terrain gets its render call, draws the chunks of the terrain, the trees, the detail objects, and all of this goes straight to the screen.
For shadows, pixel lights or projectors to work, however, the renderer has to remember what will be drawn, so that the most important lights can be calculated for each object, shadow casters can be rendered, and so on.
With intermediate mode render calls the terrain engine could «submit» this lightweight rendering information: mesh, material, position; and the rendering system would figure out everything else at rendering time.
Ok, the above sounds like a plan. What goes into developing such a feature?
The basic functionality is obvious. Draw() above would not actually draw anything immediately; it would merely record the information somewhere in the scene: «this mesh with this material will have to be drawn here». At the end of the frame, all this information would be discarded, so a Draw() call persists for one frame only.
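A rough sketch of this record-and-discard idea, in C++ (all names here, like IntermediateDraw and Matrix4x4, are hypothetical stand-ins, not Unity's actual internals):

```cpp
#include <vector>

// Hypothetical stand-ins for engine types.
struct Mesh {};
struct Material {};
struct Matrix4x4 { float m[16]; };

// One recorded draw request: nothing is rendered at call time.
struct IntermediateDraw {
    Matrix4x4 transform;
    Mesh*     mesh;
    Material* material;
};

// Per-frame list of recorded requests.
static std::vector<IntermediateDraw> g_intermediateDraws;

// Script-facing call: just record; the rendering system consumes
// this list later, when lights and shadows are figured out.
void Draw(const Matrix4x4& transform, Mesh* mesh, Material* material) {
    g_intermediateDraws.push_back({transform, mesh, material});
}

// Called once at the end of the frame, so requests live one frame only.
void ClearIntermediateDraws() {
    g_intermediateDraws.clear();
}
```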
Some design issues immediately pop up:
- Will this mesh be rendered in all cameras, or just a single one? (and which one?)
- Maybe it’s drawn in all cameras if called from inside Update(), and drawn for the current camera if called from inside OnPreCull() or OnPreRender()?
- Or maybe we add an explicit «camera» parameter, and the mesh is rendered in all cameras if no camera is indicated?
- What to do if Draw() is called from inside Start(), Awake() and other functions that are not related to a «frame» or a «camera»?
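To make the explicit «camera» parameter option concrete, here is one possible shape, sketched in C++ with hypothetical types (not Unity's actual API):

```cpp
#include <vector>

// Hypothetical stand-ins for engine types.
struct Mesh {};
struct Material {};
struct Camera {};
struct Matrix4x4 { float m[16]; };

struct IntermediateDraw {
    Matrix4x4 transform;
    Mesh*     mesh;
    Material* material;
    Camera*   camera;   // nullptr = render in all cameras
};

static std::vector<IntermediateDraw> g_draws;

// Explicit camera parameter; omitting it (nullptr) means «all cameras».
void Draw(const Matrix4x4& t, Mesh* mesh, Material* material,
          Camera* camera = nullptr) {
    g_draws.push_back({t, mesh, material, camera});
}

// At render time, a camera picks up its own draws plus the scene-wide ones.
bool VisibleInCamera(const IntermediateDraw& d, const Camera* cam) {
    return d.camera == nullptr || d.camera == cam;
}
```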
Some of the work required is also obvious: right now all renderable things in Unity are Renderer components. That means they live in some game object, and so on. For intermediate mode calls, we need to move base rendering functionality somewhere «up», so that a renderable thing does not have to be a Component in a game object.
- Make a leaner base renderer class that does not derive from Component (call it BaseRenderer). Move out base functionality from Renderer to BaseRenderer as needed. Make Renderer be both a Component and BaseRenderer.
- Make the rendering code (that figures out lights, does shadows, …) operate on BaseRenderers, not Renderers.
- Make an IntermediateRenderer class that would be used for intermediate mode. It has a transformation matrix and pointers to a mesh and a material. Make it set up and update all related internal structures that describe the Unity scene.
- Possibly need a distinction between scene-wide meshes and camera-specific meshes.
- Clear all intermediate renderers at the end of the frame. And make this work in the editor with multiple views into the scene present.
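The class-hierarchy part of that list could be sketched roughly like this (hypothetical C++, not Unity's actual source; Component here stands in for Unity's real component base):

```cpp
#include <cstddef>
#include <vector>

// Hypothetical stand-ins for engine types.
struct Mesh {};
struct Material {};
struct Matrix4x4 { float m[16]; };

// Leaner base class: a renderable thing that need not live in a game object.
class BaseRenderer {
public:
    virtual ~BaseRenderer() {}
    virtual Matrix4x4 GetTransform() const = 0;
    // ... bounds, materials, shadow-caster flags, etc. would go here
};

// Stand-in for Unity's real Component base class.
class Component {};

// The existing Renderer stays a Component, but is also a BaseRenderer.
class Renderer : public Component, public BaseRenderer {
public:
    Matrix4x4 GetTransform() const override { return transform; }
    Matrix4x4 transform{};
};

// Intermediate-mode renderable: just a matrix plus mesh/material pointers.
class IntermediateRenderer : public BaseRenderer {
public:
    IntermediateRenderer(const Matrix4x4& t, Mesh* m, Material* mat)
        : transform(t), mesh(m), material(mat) {}
    Matrix4x4 GetTransform() const override { return transform; }
    Matrix4x4 transform;
    Mesh*     mesh;
    Material* material;
};

// Rendering code now operates on BaseRenderer, not Renderer, so both
// regular and intermediate renderables go through the same path.
std::size_t CountRenderables(const std::vector<BaseRenderer*>& scene) {
    return scene.size();
}
```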
With the above, we can go and write some code to implement the feature. Look ma, no game objects here, just two lines of code drawing 100 meshes with random rotations in a loop! And it works with shadows!
So the proof that the base feature works is already there (we say it’s a killer feature for making demoscene demos). We still have the design issues from above, and some more that popped up while writing the code:
- Is culling performed on these meshes or not? It’s useful if the mesh will be rendered for all cameras, and potentially redundant if the mesh is drawn for a single camera only.
- Are layer masks indicated anywhere? Add a parameter to the function?
- What if for each mesh rendered I want to change something in the material? (change some value for a shader, use a different texture, different tint color, …) Should I instantiate materials myself? Should the API instantiate materials internally? Should I supply variable number of arguments saying «and do these changes to the material before drawing»?
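One way the «do these changes to the material before drawing» option could be shaped is a small list of per-draw property overrides on top of a shared material, so callers need not instantiate a full material copy per mesh. A hypothetical sketch (not Unity's API; real shader properties would of course be more than single floats):

```cpp
#include <string>
#include <utility>
#include <vector>

// Hypothetical stand-ins for engine types.
struct Mesh {};
struct Matrix4x4 { float m[16]; };

struct Material {
    std::string shaderName;
};

// One «change this before drawing» entry; simplified to float values.
struct PropertyOverride {
    std::string name;   // shader property, e.g. "_Color"
    float       value;
};

struct DrawRequest {
    Matrix4x4 transform;
    Mesh*     mesh;
    Material* material;                        // shared, never copied
    std::vector<PropertyOverride> overrides;   // applied just for this draw
};

static std::vector<DrawRequest> g_requests;

void Draw(const Matrix4x4& t, Mesh* mesh, Material* material,
          std::vector<PropertyOverride> overrides = {}) {
    g_requests.push_back({t, mesh, material, std::move(overrides)});
}
```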
It’s a long rocky road between thinking up a feature and shipping it. Surprisingly, the actual development does not take up much time; the most difficult part is answering all the design questions.
Anyway, after all design issues are solved (or… ignored!), the feature is usually shipped. Yay!