Hello everyone. I wanted to show you something I’ve been working on for the last couple of FAFFs. The purpose of this post is to interest some of the technical types among you. If you don’t care about a technical discussion of an unfinished feature, you can just read about the basics or jump right to the video showing some WIP pretty pixels.
The problem we are trying to solve is how to use baked lighting on dynamic objects and characters. After lightmapping a scene, all static objects have nice, high quality lighting. If we don’t do anything about the dynamic objects, though, they might look dull and detached from the environment, lit by direct light and ambient only.
One solution is to use light probes to sample the nice baked lighting at various points in the scene. We can then interpolate between the nearby probes to estimate what the lighting would be at the current location of our character (or other moving object) and use the result to light the character.
Let’s see what it looks like in action! (Apologies for the cheesy lighting with over-the-top light bounce, but it makes it easier to illustrate the effect.)
[vimeo clip_id="20385528" width="640" height="360"]
The light probes can be used to store the full incoming lighting or just the indirect lighting (plus full lighting from emissive materials and ‘baked only’ lights), i.e. the Dual Lightmaps style. Either way, there’s a lot of flexibility in how the probe can then be used in the shader: full lighting per vertex, full lighting per pixel with normal mapping, or even per-vertex indirect light blended with per-pixel direct light with normal mapping.
To use the light probes, you need to add them to your scene, bake and voilà - all dynamic renderers with the feature enabled use light probe lighting, both in edit and play mode.
I haven’t decided yet how placing the probes in the scene will work, as it ties in with which interpolation technique I’ll use. The probes will probably either be placed automatically (with some global control from the user’s side) or manually, as has been done in this demo. If I go for the latter, I’ll make sure to add some utility functions to pre-place the probes over a navigation mesh, expose probe placement to scripting and possibly allow probe locations to be part of prefabs – everything to keep the manual process painless while still giving a lot of control.
You can be sure this feature will be released as soon as it’s ready and not later. I don’t know when that might be, though ;) For now light probes remain my FAFF project while I’m focusing on a bigger feature, which should make a lot of you happy as well.
Light Probes and Spherical Harmonics
To bake a light probe, we need the amount of light incoming from every possible direction – and that’s a lot of directions! But – we also know that most likely the incoming light doesn’t change that quickly between different directions. In other words it doesn’t have high frequency changes, so if we compress that data in the frequency domain on a sphere by discarding all the higher frequencies – no one should notice.
Storing the incoming light using Spherical Harmonics achieves just that. Spherical harmonics basis functions can be thought of as the equivalent of the harmonics in Fourier analysis, but defined on a sphere. The more bands (groups of basis functions) we decide to take into account, the more accurately we’ll be able to reconstruct the original incoming light signal. Beast can bake light probes directly as spherical harmonics coefficients, which tell us “how much” of each basis function our signal contained. The original function describing how much light comes from a given direction can then be reconstructed as just a linear combination of the coefficients and the basis functions.
Let’s look at an example. We could ask Beast to bake one light probe for us at a given location. That light probe would just be a bunch of SH coefficients – if we decide that 3 bands are enough, that requires storing a coefficient for each of the 9 basis functions for each of the 3 color channels, so a total of 27 floats. If a dynamic object then ended up in the exact same location as the light probe, we could say: hey, we know what the incoming light at that position is. In the vertex shader (or pixel shader, if we wanted more precision and normal mapping) we could then evaluate the SH coefficients for the direction given by the object’s normal and light our dynamic object with the light probe that way.
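To make those 27 floats a bit more concrete, here’s a minimal sketch in Python (outside of Unity, purely for illustration – the coefficient layout and function names are my own assumptions, not how Beast or Unity actually store probes) of evaluating a 3-band probe for a given normal direction, using the standard real spherical harmonics basis:

```python
import numpy as np

def sh_basis(direction):
    # Standard real SH basis for 3 bands (9 functions), evaluated for a unit direction.
    d = np.asarray(direction, dtype=float)
    x, y, z = d / np.linalg.norm(d)
    return np.array([
        0.282095,                        # Y_0^0
        0.488603 * y,                    # Y_1^-1
        0.488603 * z,                    # Y_1^0
        0.488603 * x,                    # Y_1^1
        1.092548 * x * y,                # Y_2^-2
        1.092548 * y * z,                # Y_2^-1
        0.315392 * (3.0 * z * z - 1.0),  # Y_2^0
        1.092548 * x * z,                # Y_2^1
        0.546274 * (x * x - y * y),      # Y_2^2
    ])

def evaluate_probe(probe, normal):
    # probe: 9 coefficients per color channel, i.e. a (3, 9) array = 27 floats.
    # Reconstruction is just a linear combination of coefficients and basis functions.
    return probe @ sh_basis(normal)  # -> RGB light arriving from the normal's direction
```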
The real problem: placement and interpolation
That brings us to the real problem: how do we decide where to place the probes in the scene? And how do we interpolate the probes’ values once we have a bunch of probe locations and an object sitting somewhere in between?
A useful property of spherical-harmonics-encoded probes is that a linear interpolation between two probes is just a linear interpolation of their coefficients. So if we had all the probes placed along a path and our object was moving along that path (racing game, anyone?), we would just linearly interpolate between the two probes at the ends of the current path segment to find the lighting.
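The interpolation itself really is that simple – a tiny sketch, reusing the hypothetical (3, 9) coefficient layout assumed above:

```python
def lerp_probes(probe_a, probe_b, t):
    # t in [0, 1]: how far along the current path segment the object is.
    # Interpolating two SH-encoded probes = interpolating their coefficient arrays.
    return (1.0 - t) * probe_a + t * probe_b
```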
At the same time we have to think about where the light probes need to be located. Surely we only want them where our dynamic objects can go – there’s no point in baking and storing probes which will never be used. We also want the probes to encode interesting changes in lighting, but we don’t want extra probes where the lighting changes slowly or where the change is something our lighting artist doesn’t care about in this spot.
At this point I should probably list the properties I would like the interpolation function to have, but they’re actually quite intuitive. Let me just emphasise the most important one: if there’s a probe at the location we’re evaluating the function for, it should return a weight of 1 for that probe and weights of 0 for all the others. This is another way of saying: light probes sample the underlying lighting information, so there’s no need for guessing there – we know exactly what the lighting is at those points.
Here’s a couple of possible solutions:
Uniform grid. The entire scene is put into a bounding box which is then subdivided to an artist-defined density and a probe is placed at each cell corner. This approach doesn’t require much interaction from the user and the interpolation is simple, robust and relatively cheap (it’s just a trilinear interpolation of the probes at the cell’s corners and it’s easy to find the right cell). The downside is that you often have to dial up the density quite a bit to capture that bit of lighting you care about in just one spot, which makes a lot of the probes filling the major part of the volume completely useless – and that wastes a lot of memory. Also, you will get into situations where there’s a row of probes close to the wall and a row of probes just inside the wall or on the other side of it (so e.g. much darker): your character approaches the wall and suddenly the darkness starts to creep in, although it shouldn’t. The way this has been solved for Milo and Kate and at least one other current title I know about is that each probe encodes additional visibility data which limits its influence up to the nearest obstacle. This, however, adds to the memory footprint and interpolation time. It also might introduce artefacts of its own if the grid is not fine enough.
Adaptively subdivided grid. It’s a concept similar to the one above, except that probe density can vary where needed. The structure could be an octree, in which we keep subdividing cells if we expect changes in lighting that need capturing. A good heuristic for doing so might be testing whether the current cell contains scene geometry – if it does, there’s a better chance of higher-frequency changes in the lighting. After the probes have been baked there’s also the possibility of clustering similar probes. This approach should solve the “memory monster” issue of the uniform grid, but at the cost of slightly more complex and branchy search and interpolation code. It still needs to store visibility information like the previous solution, and the subdivision heuristic might be wrong.
K nearest probes. With this approach we just search for the K nearest probes and interpolate between them. This time the probes don’t need to be placed on a grid – we can put them anywhere. The biggest issue here is that the set of probes used for interpolation can change suddenly at any time, even when the just-excluded and just-included probes have high interpolation weights, resulting in visible light popping. To minimise this effect, we can employ some damping – interpolating from the old value to the new one over time. The delay will be visible in some cases, but it’s still better than a sudden pop of lighting.
Tetrahedralisation. Once the probes have been placed, we can find the Delaunay tetrahedralisation of that point set. To find the interpolated probe for our current location, we first search for the containing tetrahedron and then calculate the barycentric coordinates, which can be used as interpolation weights for the probes at the four vertices of the tetrahedron. Finding the containing tetrahedron can be done efficiently by always starting the test with the last tetrahedron we were in. If the test fails (it usually won’t), we find the face whose normal points most directly towards our location (highest dot product) and move to the tetrahedron adjacent across that face.
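To make the barycentric part concrete, here’s a small illustrative sketch (Python again, not an actual implementation) of computing the four weights for a point inside a tetrahedron and blending the corner probes with them; the walk over adjacent tetrahedra and its bookkeeping are left out:

```python
import numpy as np

def barycentric_weights(p, a, b, c, d):
    # Solve p = a + u*(b-a) + v*(c-a) + w*(d-a) for (u, v, w);
    # the four barycentric weights are then (1-u-v-w, u, v, w).
    m = np.column_stack((b - a, c - a, d - a))
    u, v, w = np.linalg.solve(m, p - a)
    return np.array([1.0 - u - v - w, u, v, w])

def interpolate_in_tetrahedron(p, vertices, probes):
    # 'vertices' are the 4 corner positions, 'probes' the 4 SH coefficient
    # arrays baked at those corners. The weights sum to 1, and a point sitting
    # exactly on a vertex gets weight 1 for that probe and 0 for the others -
    # the property mentioned earlier.
    weights = barycentric_weights(p, *vertices)
    return sum(w * probe for w, probe in zip(weights, probes))
```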
The first two solutions can be made fully automatic and that’s definitely an advantage. On the other hand, somehow I can’t accept the fact that we wouldn’t have more direct control over which areas are important to sample and which aren’t, which will lead to over-sampling and wasting memory in some areas and under-sampling and losing information in other areas at the same time.
The other two solutions have the advantage of giving us control over where the probes are placed. That might become a problem, though, as when the level geometry changes, the manual work has to be re-done. A couple of things could be done to improve the workflow there, like automatic pre-placing of probes over the nav mesh – and these initial positions could then be modified by hand. Also, with probe positioning exposed to scripting, developers could write scripts automatically placing probes in the areas which make sense for a given game and artist workflow, but which couldn’t be generalised enough to be included in Unity. Probes could also be parented to objects to move with them (only at edit time) or made part of prefabs.
The K-nearest probes approach seems quite reasonable and it has been used in a couple of successful games. If K is low, the interpolated probe can be calculated efficiently and it’s easy to ensure the property of getting the probe’s value when at its exact position. The issues that still bother me are the at times unexpected interpolation (the K nearest probes aren’t always the ones you would expect) and the temporal damping that tries to compensate for popping, but sometimes introduces visible light changes while the interpolation is catching up, even though the object isn’t moving any more.
The video above shows the first interpolation scheme I tried out: 2-nearest probes with temporal damping. I project the character’s center onto the line passing through the two nearest probes and linearly interpolate if the projection falls between them, or clamp if it falls outside the segment. If one or both of the nearest probes change, I take a snapshot of the interpolated probe and lerp it out over time as the new interpolated probe lerps in. I think the results are acceptable, but that’s for you to judge. There are a couple of moments when the character has already stopped and the light is still catching up. The speed of that interpolation can be tweaked per object and can also be controlled from a script as a function of e.g. the object’s current speed.
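For the curious, here’s roughly what that scheme boils down to (a Python sketch with assumed names and structure; the damping below is a simple exponential blend towards the target rather than the exact snapshot cross-fade described above):

```python
import numpy as np

class TwoNearestProbeInterpolator:
    def __init__(self, probe_positions, probe_coeffs, damping_speed=4.0):
        self.positions = probe_positions    # probe positions as numpy vectors
        self.coeffs = probe_coeffs          # matching SH coefficient arrays
        self.damping_speed = damping_speed  # tweakable per object
        self.current = None                 # currently applied (damped) probe

    def _target_probe(self, p):
        # Pick the two nearest probes, project p onto the segment between them,
        # clamp to the segment and lerp their coefficients.
        order = np.argsort([np.linalg.norm(p - q) for q in self.positions])
        a, b = self.positions[order[0]], self.positions[order[1]]
        seg = b - a
        t = np.clip(np.dot(p - a, seg) / np.dot(seg, seg), 0.0, 1.0)
        return (1.0 - t) * self.coeffs[order[0]] + t * self.coeffs[order[1]]

    def update(self, p, dt):
        # Damp towards the new target so a change of the nearest-probe pair
        # fades in over time instead of popping.
        target = self._target_probe(p)
        if self.current is None:
            self.current = target
        k = min(1.0, self.damping_speed * dt)
        self.current = (1.0 - k) * self.current + k * target
        return self.current
```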
The probes in the video are baked in a similar fashion to near lightmaps in Dual Lightmaps, so they exclude direct lighting from Auto lights, but they do include their indirect lighting contribution and also full contribution from Baked Only lights and emissive materials (like the green puddle of goo by the barrel). Their contribution is calculated per vertex, while real-time direct light is calculated per pixel with a normal-mapped specular material. All this is handled internally by the surface shader framework, so the shader on the character is just the built-in bump specular.
Next I will probably try out the Delaunay tetrahedralisation approach. I have high hopes for that one, as it seems it should result in interpolation closer to what we intuitively expect while retaining the fine-grained control. Memory consumption and search and interpolation performance should also be on par with K-nearest probes and the adaptively subdivided grid, but I should probably test some actual implementations on real-world scenes before getting too attached to those claims. The biggest worry here is that Delaunay tetrahedralisation is at best tricky if the input points form degenerate patterns (e.g. all points along a line) and it might still create long and thin tetrahedra, especially near the hull surface.
One option worth investigating here: if we discover that all the points are roughly co-planar (or at least don’t form more than one layer), the entire problem can be reduced to a Delaunay triangulation in 2D, with the interpolation done in 2D as well.
This is it for now. It would be good to hear your suggestions on the topic, so feel free to comment :)