Light probes

March 9, 2011 in Technology

Hello everyone. I wanted to show you something I’ve been working on for the last couple of FAFFs. The purpose of this post is to interest some of the technical types among you. If you don’t care about a technical discussion of an unfinished feature, you can just read about the basics or jump right to the video showing some WIP pretty pixels.

The Basics

The problem we are trying to solve is how to use baked lighting on dynamic objects and characters. After lightmapping a scene, all static objects have nice, high-quality lighting. If we don’t do anything about the dynamic objects, though, they might look dull and detached from the environment, lit with only direct light and ambient.

One solution is to use light probes to sample the nice baked lighting at various points in the scene. We can then interpolate the nearby probes to guess what the lighting would be at the current location of our character (or other moving object) and use the result to light it.

Let’s see how it looks in action! (Apologies for the cheesy lighting with over-the-top light bounce, but it makes the effect easier to illustrate.)

[vimeo clip_id="20385528" width="640" height="360"]

The light probes can be used to store the full incoming lighting or just the indirect lighting (plus full lighting from emissive materials and ‘baked only’ lights), i.e. the Dual Lightmaps style. Either way, we have a lot of flexibility in how the probe is then used in the shader: full lighting per vertex, per pixel with normal mapping, or even per-vertex indirect light blended with per-pixel direct light with normal mapping.

To use light probes, you add them to your scene, bake, and voilà – all dynamic renderers with the feature enabled use light probe lighting, both in edit and play mode.

I haven’t decided yet how placing the probes in the scene will work, as it ties in with which interpolation technique I’ll use. The probes will probably either be placed automatically (with some global control from the user’s side) or manually, as was done in this demo. If I go for the latter, I’ll make sure to add some utility functions to pre-place the probes over a navigation mesh, expose probe placement to scripting and possibly allow probe locations to be part of prefabs – everything to keep the manual process painless while still giving a lot of control.

You can be sure this feature will be released as soon as it’s ready and not later. I don’t know when that might be, though ;) For now light probes remain my FAFF project while I’m focusing on a bigger feature, which should make a lot of you happy as well.

The Details

Light Probes and Spherical Harmonics

To bake a light probe, we need the amount of light incoming from every possible direction – and that’s a lot of directions! But we also know that the incoming light most likely doesn’t change that quickly between nearby directions. In other words it doesn’t have high-frequency changes, so if we compress that data in the frequency domain on the sphere by discarding all the higher frequencies, no one should notice.

Storing the incoming light using Spherical Harmonics achieves just that. Spherical harmonics basis functions can be thought of as the equivalent of harmonics in Fourier analysis, but on a sphere. The more bands (groups of basis functions) we decide to take into account, the more accurately we’ll be able to reconstruct the original incoming light signal. Beast can bake light probes directly as spherical harmonics coefficients, which tell us “how much” of each basis function our signal contains. The original function describing how much light comes from a given direction can then be reconstructed as a simple linear combination of the coefficients and the basis functions.

[Image: LightProbes2]

Let’s look at an example. We could ask Beast to bake one light probe for us at a given location. That light probe would just be a bunch of SH coefficients – if we decided that 3 bands are enough, that would mean storing a coefficient for each of the 9 basis functions for each of the 3 color channels, so a total of 27 floats. If a dynamic object then ended up in the exact same location as the light probe, we could say: hey, we know what the incoming light at that position is. In the vertex shader (or pixel shader, if we wanted more precision and normal mapping) we could then decode the SH coefficients for the direction dictated by the object’s normal and light our dynamic object with the light probe that way.
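
To make this concrete, here’s a minimal sketch (in Python with NumPy) of what decoding such a probe could look like – not Beast’s or Unity’s actual code. It uses the standard real SH basis for bands 0–2; the evaluate_probe helper and the flat grey test probe are just illustrative, and in practice the diffuse (cosine) convolution would also be folded into the baked coefficients.

```python
import numpy as np

def sh_basis(n):
    """Evaluate the 9 real SH basis functions (bands 0-2) for a unit direction n."""
    x, y, z = n
    return np.array([
        0.282095,                        # Y(0, 0)
        0.488603 * y,                    # Y(1,-1)
        0.488603 * z,                    # Y(1, 0)
        0.488603 * x,                    # Y(1, 1)
        1.092548 * x * y,                # Y(2,-2)
        1.092548 * y * z,                # Y(2,-1)
        0.315392 * (3.0 * z * z - 1.0),  # Y(2, 0)
        1.092548 * x * z,                # Y(2, 1)
        0.546274 * (x * x - y * y),      # Y(2, 2)
    ])

def evaluate_probe(coeffs, normal):
    """coeffs: (9, 3) SH coefficients, one column per RGB channel (27 floats total).
    Reconstructs the light from the given direction as a linear combination
    of the coefficients and the basis functions."""
    n = normal / np.linalg.norm(normal)
    return sh_basis(n) @ coeffs  # -> (3,) RGB value

# A made-up probe: only the band-0 (constant) term set, i.e. a flat grey ambient.
probe = np.zeros((9, 3))
probe[0] = [0.5, 0.5, 0.5]
print(evaluate_probe(probe, np.array([0.0, 1.0, 0.0])))
```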

The real problem: placement and interpolation

That brings us to the real problem: how do we decide where to place the probes in the scene? And how do we interpolate the probes’ values once we have a bunch of probe locations and an object sitting somewhere in between?

A useful property of spherical-harmonics-encoded probes is that a linear interpolation between two probes is just a linear interpolation of their coefficients. So if we had all the probes placed along a path and our object was moving along that path (racing game, anyone?), we would just linearly interpolate between the two probes at the ends of the current path segment to find the lighting.
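
As a tiny illustration of that property, here’s a sketch assuming the probes are stored as 27-float arrays laid out along the path and the object’s progress is expressed as a distance along it (all names hypothetical):

```python
import numpy as np

def probe_along_path(path_distances, path_probes, s):
    """path_distances: increasing distances of the probes along the path,
    path_probes: (n, 27) baked SH coefficients, s: current distance along the path.
    Lerp the two probes bracketing s; clamp at the ends of the path."""
    i = np.searchsorted(path_distances, s)
    if i == 0:
        return path_probes[0]
    if i == len(path_distances):
        return path_probes[-1]
    t = (s - path_distances[i - 1]) / (path_distances[i] - path_distances[i - 1])
    # Interpolating SH-encoded probes is just interpolating their coefficients.
    return (1.0 - t) * path_probes[i - 1] + t * path_probes[i]
```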

At the same time we have to think about where the light probes need to be located. Surely we only want them where our dynamic objects can go – there’s no point in baking and storing probes which will never be used. We also want the probes to capture interesting changes in lighting, but we don’t want extra probes where the lighting changes slowly or where the change is something our lighting artist doesn’t care about in that spot.

[Image: LightProbes1]

At this point I should probably list the properties I would like the interpolation function to have, but they’re actually quite intuitive. Let me just emphasise the most important one: if there’s a probe at the location we’re evaluating the function for, it should return a weight of 1 for that probe and weights of 0 for all the others. This is another way of saying: light probes sample the underlying lighting information, so there’s no need for guessing there – we know what the lighting is at those exact points.

Here are a few possible solutions:

Uniform grid. The entire scene is put into a bounding box, which is then subdivided to an artist-defined density, and a probe is placed at each cell corner. This approach doesn’t require much interaction from the user and the interpolation is simple, robust and relatively cheap (it’s just a trilinear interpolation of the probes at the cell’s corners, and it’s easy to find the right cell). The downside is that you often have to dial up the density quite a bit to capture the light you care about in just one spot, which leaves a lot of the probes filling the rest of the volume completely useless – and that wastes a lot of memory. You will also get into situations where there’s a row of probes close to a wall and a row of probes just inside the wall or on the other side of it (so e.g. much darker): your character approaches the wall and suddenly the darkness starts to creep in although it shouldn’t. The way this has been solved for Milo and Kate and at least one other current title I know about is that each probe encodes additional visibility data which limits its influence up to the nearest obstacle. This, however, adds to the memory footprint and interpolation time. It also might introduce artefacts of its own if the grid is not fine enough.
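
For illustration, here’s a rough sketch of that trilinear lookup, assuming the baked probes sit in a 3D array of 27-float coefficient vectors, one per grid corner, with at least two probes along each axis; the array layout and the border clamping are my own assumptions:

```python
import numpy as np

def sample_grid(probes, grid_min, cell_size, position):
    """probes: (nx, ny, nz, 27) SH coefficients baked at grid corners.
    Trilinearly interpolate the 8 corner probes of the cell containing 'position'."""
    # Position in grid space, clamped so we always land inside a valid cell.
    p = (np.asarray(position) - grid_min) / cell_size
    p = np.clip(p, 0.0, np.array(probes.shape[:3]) - 1.001)
    i = p.astype(int)   # index of the cell's min corner
    fx, fy, fz = p - i  # fractional position inside the cell
    ix, iy, iz = i

    c = probes
    # Interpolate along x, then y, then z.
    c00 = c[ix, iy,     iz    ] * (1 - fx) + c[ix + 1, iy,     iz    ] * fx
    c10 = c[ix, iy + 1, iz    ] * (1 - fx) + c[ix + 1, iy + 1, iz    ] * fx
    c01 = c[ix, iy,     iz + 1] * (1 - fx) + c[ix + 1, iy,     iz + 1] * fx
    c11 = c[ix, iy + 1, iz + 1] * (1 - fx) + c[ix + 1, iy + 1, iz + 1] * fx
    c0 = c00 * (1 - fy) + c10 * fy
    c1 = c01 * (1 - fy) + c11 * fy
    return c0 * (1 - fz) + c1 * fz  # 27 interpolated coefficients
```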

Adaptively subdivided grid. A concept similar to the one above, except that probe density can vary where needed. The structure could be an octree, in which we keep subdividing cells if we expect changes in lighting that need capturing. A good heuristic might be testing whether the current cell contains scene geometry – if it does, there’s a better chance of higher-frequency changes in the lighting. After the probes have been baked, there’s also the possibility of clustering similar probes. This approach should solve the “memory monster” issue of the uniform grid, but at the cost of slightly more complex and branchy search and interpolation code. It still needs to store visibility information like the previous solution, and the subdivision heuristic might be wrong.
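
A hedged sketch of that subdivision heuristic – cell_intersects_geometry is a hypothetical callback (e.g. a bounds test against the static scene), not an existing Unity or Beast API:

```python
def build_probe_cells(center, half_size, cell_intersects_geometry, max_depth=5):
    """Recursively subdivide an axis-aligned box into octree cells, going deeper
    only where a cell intersects scene geometry (a proxy for lighting detail).
    Returns the leaf cells; probes would then be baked at their corners."""
    if max_depth == 0 or not cell_intersects_geometry(center, half_size):
        return [(center, half_size)]
    leaves = []
    child_half = tuple(h * 0.5 for h in half_size)
    for dx in (-0.5, 0.5):
        for dy in (-0.5, 0.5):
            for dz in (-0.5, 0.5):
                child_center = (center[0] + dx * half_size[0],
                                center[1] + dy * half_size[1],
                                center[2] + dz * half_size[2])
                leaves += build_probe_cells(child_center, child_half,
                                            cell_intersects_geometry, max_depth - 1)
    return leaves
```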

K nearest probes. With this approach we just search for the K nearest probes and interpolate between them. This time the probes don’t need to be placed on a grid – we can put them anywhere. The biggest issue is that the set of probes used for interpolation can change suddenly at any time – even when the just-excluded and just-included probes have high interpolation weights – resulting in visible light popping. To minimise this effect we can employ some damping: interpolate from the old value to the new one over time. The delay will be visible in some cases, but it’s still better than a sudden pop of lighting.
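
As a sketch only: one simple way to realise this is inverse-distance weighting over the K nearest probes, which also satisfies the ‘weight 1 at a probe’s exact position’ property mentioned earlier. The weighting function here is an assumption for illustration, not necessarily what would ship:

```python
import numpy as np

def k_nearest_probe(positions, coefficients, query, k=4, eps=1e-6):
    """positions: (n, 3) probe locations, coefficients: (n, 27) baked SH data.
    Blend the k nearest probes with inverse-distance weights."""
    d = np.linalg.norm(positions - query, axis=1)
    nearest = np.argsort(d)[:k]
    # Standing exactly on a probe: return it directly (weight 1, all others 0).
    if d[nearest[0]] < eps:
        return coefficients[nearest[0]]
    w = 1.0 / d[nearest]
    w /= w.sum()
    return w @ coefficients[nearest]
```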

Tetrahedralisation. Once the probes have been placed, we can find the Delaunay tetrahedralisation of that point set. To find the interpolated probe for our current location, we first search for the containing tetrahedron and then calculate barycentric coordinates, which can be used as interpolation weights for the four vertices of the tetrahedron. Finding the containing tetrahedron can be done efficiently by always starting with the last one we were in. If the test fails (it usually won’t), we find the tetrahedron face normal pointing most directly towards our location (highest dot product) and move to the tetrahedron adjacent across that face.
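
Here’s a minimal sketch of the barycentric part, computing the four weights by inverting a 3×3 matrix of edge vectors (the same trick mentioned in the comments below); the Delaunay construction and the tetrahedron walk are left out:

```python
import numpy as np

def barycentric_weights(tetra, p):
    """tetra: (4, 3) vertex positions of the containing tetrahedron, p: (3,) point.
    Returns the 4 barycentric coordinates, usable directly as interpolation weights."""
    a, b, c, d = tetra
    m = np.column_stack((b - a, c - a, d - a))  # 3x3 matrix of edge vectors
    w_bcd = np.linalg.solve(m, p - a)           # weights of vertices b, c, d
    return np.concatenate(([1.0 - w_bcd.sum()], w_bcd))

def interpolate_probes(tetra, probe_coeffs, p):
    """probe_coeffs: (4, 27) SH coefficients of the tetrahedron's corner probes.
    Inside the tetrahedron all weights are in [0, 1] and sum to 1; a negative
    weight means p lies outside and we should move to the adjacent tetrahedron."""
    return barycentric_weights(tetra, p) @ probe_coeffs
```
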
The first two solutions can be made fully automatic, and that’s definitely an advantage. On the other hand, somehow I can’t accept that we wouldn’t have more direct control over which areas are important to sample and which aren’t, which will lead to over-sampling and wasting memory in some areas while under-sampling and losing information in others.

[Image: LightProbes3]

The other two solutions have the advantage of giving us control over where the probes are placed. That might become a problem, though: when the level geometry changes, the manual work has to be redone. A couple of things could be done to improve the workflow there, like automatic pre-placement of probes over the nav mesh – these initial positions could then be modified by hand. Also, with probe positioning exposed to scripting, developers could write scripts that automatically place probes in areas which make sense for a given game and artist workflow, but which couldn’t be generalised enough to be included in Unity. Probes could also be parented to objects to move with them (at edit time only) or made part of prefabs.

The K-nearest probes approach seems quite reasonable and it has been used in a couple of successful games. If K is low, the interpolated probe can be calculated efficiently, and it’s easy to ensure the property of getting a probe’s exact value when standing at its position. The issues that still bother me are the occasionally unexpected interpolation (the K nearest probes aren’t always the ones you would expect) and the temporal damping, which tries to compensate for popping but sometimes introduces visible light changes as the interpolation catches up even though the object isn’t moving any more.

The video above shows the first interpolation scheme I tried out: 2-nearest probes with temporal damping. I project the character’s center onto the line passing through the two nearest probes and linearly interpolate if the position falls in between, or clamp if it falls outside the segment. If one or both of the nearest probes change, I take a snapshot of the interpolated probe and lerp it out over time as the new interpolated probe lerps in. I think the results are acceptable, but that’s for you to judge. There are a couple of moments when the character has already stopped and the light is still catching up. The speed of that interpolation can be tweaked per object and can also be controlled from a script, e.g. as a function of the object’s current speed.
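
For the curious, here’s how that scheme could be sketched – the names, the fade timing and the DampedProbe helper are my own reconstruction from the description above, not the actual implementation:

```python
import numpy as np

def two_probe_lerp(p0, c0, p1, c1, position):
    """Project 'position' onto the segment p0-p1, clamp, and lerp the coefficients."""
    seg = p1 - p0
    t = np.clip(np.dot(position - p0, seg) / np.dot(seg, seg), 0.0, 1.0)
    return (1.0 - t) * c0 + t * c1

class DampedProbe:
    """When the pair of nearest probes changes, snapshot the previous result and
    fade from it to the newly interpolated probe over 'fade_time' seconds."""
    def __init__(self, fade_time=0.5):
        self.fade_time = fade_time
        self.snapshot = None  # probe we are fading away from
        self.current = None   # last returned result
        self.fade = 1.0       # 0 = all snapshot, 1 = all new target

    def update(self, target, pair_changed, dt):
        if self.current is None:  # first frame: nothing to fade from
            self.snapshot = self.current = target
            return target
        if pair_changed:
            self.snapshot = self.current  # freeze what was on screen
            self.fade = 0.0
        self.fade = min(1.0, self.fade + dt / self.fade_time)
        self.current = (1.0 - self.fade) * self.snapshot + self.fade * target
        return self.current
```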

The probes in the video are baked in a similar fashion to near lightmaps in Dual Lightmaps, so they exclude direct lighting from Auto lights, but they do include their indirect lighting contribution and also full contribution from Baked Only lights and emissive materials (like the green puddle of goo by the barrel). Their contribution is calculated per vertex, while real-time direct light is calculated per pixel with a normal-mapped specular material. All this is handled internally by the surface shader framework, so the shader on the character is just the built-in bump specular.

Next I will probably try out the Delaunay tetrahedralisation approach. I have high hopes for that one, as it seems it should result in interpolation closer to what we intuitively expect while retaining fine-grained control. Memory consumption and search and interpolation performance should also be on par with K-nearest probes and the adaptively subdivided grid, but I should probably test some actual implementations on real-world scenes before getting too attached to those claims. The biggest worry here is that Delaunay tetrahedralisation is tricky at best if the input data forms degenerate patterns (e.g. all points along a line), and it might still create long, thin tetrahedra, especially at the hull surface.

One option worth investigating here is that if we discover all points are roughly co-planar (or at least don’t form more than one layer), the entire problem can be brought down to Delaunay triangulation in 2D and interpolation in 2D as well.

This is it for now. It would be good to hear your suggestions on the topic, so feel free to comment :)

Comments (46)

  1. Wolfos

    June 8, 2011 at 5:45 am / 

    So it’s actually colour bleeding, but with faster precomputation, as it only needs to be computed for the light probes?
    They look nice, but as the effect should be very subtle to be realistic, I doubt you will want to sacrifice processing power for so little detail at this point.

  2. Robert Cupisz

    March 30, 2011 at 6:35 am / 

    The nice thing about DTet is that it gives me an interpolation with all the properties I think are really essential, like a probe influence lerping off to exactly 0 before a new one starts lerping in from exactly 0. Relaxing the assumptions about DTet can easily degenerate this method to one of the others with their own problems, but I’ll see what I can do ;)

    There’s no need to construct any ‘tetrahedron geometry’ when calculating the barycentric coordinates. When you have the four points, it’s just throwing their coords into a 3×3 matrix, inverting the matrix and multiplying your position by it – easy, peasy.

    That paper discusses a nice optimisation to triangulation, with an approximate step on the GPU and refinement on the CPU, but hey – I don’t really want to bother optimising (the offline part of) a technique that maybe doesn’t even work like I’d like it to ;)

    And yea, I saw TetGen and it looks pretty cool – I’ll definitely check it out when optimizing the tetrahedralization. But for now I’ll just do the naive tetrahedralization and see on how many points it chokes ;) I just need to get my current project out of the way…

  3. Sam Martin

    March 30, 2011 at 4:46 am / 

    Having said all that, I just found this library which I haven’t seen before:
    http://tetgen.berlios.de/

    Sounds cool. Haven’t looked at it at all.

    I note that it relies on Shewchuk’s ‘fast and robust predicates’ library, which, although theoretically very cool, I’ve been rather suspicious about because I’m not yet convinced by the ‘robust’ element of the actual implementation.

    ta,
    Sam

  4. Sam Martin

    March 30, 2011 at 4:37 am / 

    It’s not *that* much easier, I’ll grant you :) You can use an algorithm that compares a lot more light probe points to your target sample point than would be feasible if you wanted the entire DTet. You ‘just’ need to find the set of points that can affect your target point. As a bonus, once you know this I think you can build the barycentric weights for your point without having to construct the actual tetrahedron geometry. Although I should warn you – I haven’t actually done this in 3D (but have done very similar things in 2D).

    Another idea that springs to mind is to use some brute force-like approach and splat falloff distance and ids using the gpu. Similar to this: http://www.comp.nus.edu.sg/~tants/delaunay.html
    It’s approximate, but again, this might be fine.

    Delaunay interpolation is cool because it ensures you interpolate through the points you know are correct. However, you can interpolate your data – with blurring – if you relax this restriction. For instance, an averaging box filter will do something useful. My suspicion is that hacking down this line may turn out usable results with less effort.

    But then again, delaunay tetrahedralisation is cool :)

  5. Robert Cupisz

    March 29, 2011 at 3:48 pm / 

    Hmm, I’m not sure what would be the advantage of your first suggestion. I need to be doing my “which probes should I interpolate between” queries at runtime for dynamic objects. Finding a DTetrahedron encompassing my current location is not a global operation on the set of all points, that’s true, but it is not as local as just getting 4 nearest points, so I wouldn’t know how to calculate it efficiently. But maybe I misunderstood you?

    “Less than X away from me” means we have to have an X, which in some cases will be too small and not even contain a single probe and in other cases – contain too many probes which would give a “blurry” result even if we figured out how to interpolate them. We could vary X, but again – based on what? ;)

  6. Sam Martin

    March 29, 2011 at 8:23 am / 

    Hi Robert,

    One thought (which I think fits with your per-object approach) is that you can think of a DTet as being just an acceleration structure. So if you are doing the “what are my nearest points” query offline, you don’t necessarily need to build a DTet to answer this in a reasonable amount of time. To get the right answer you just need to find the points that bound your sample point and no other sample points – essentially, it’s easier to compute one Delaunay cell for a given sample point than it is to compute the entire thing.

    As another idea, you can insert some extra assumption, like “I only want to interpolate between probes that are less than X away from me”, which bounds the search and may make doing an efficient lookup faster.

    And then there are approximations to the delaunay interpolation that interpolate the probes in a different, but sufficiently acceptable manner. Without more light probe points to hand, there is no right answer as to how your probes should be interpolated so you are allowed creative freedom here :)

    A while back, I did a quick 2D test (in Processing) to see how well you could approximate the “correct” Delaunay interpolation by just averaging points into a grid and using that data to do the interpolation. The answer was that the grid representation was much, much worse than the correct interpolation when the grid had fewer or about the same number of voxels as the number of light probe points. Having said that, if you can work with a grid that’s high resolution enough, you can probably do a better approximation. I didn’t spend any more than a few hours on it though, so I suspect a version of this approach that isn’t quite as brain-dead might have some legs. Particularly if you can rely on your original light probes to be non-crazy in their distribution.

    Anyway, just a few thoughts!

    ta,
    Sam

  7. Robert Cupisz

    March 28, 2011 at 4:40 am / 

    @laurent:
    1. For sure it would be good if prefabs could at least contain probe locations (so that you don’t have to place probes in every instance of the same building separately), but it’s a good point that for some cases actual baked probes should sit in the prefab as well (so that you don’t have to re-bake or re-paint probes for new instances – especially if you instantiate at runtime).

    In principle there should be no limitation requiring the probe volumes of different prefabs not to intersect, but that depends on the interp method, so we’ll see.

    2. And yes, I should make it possible to paint probes by hand, but most likely it will just be exposed as an API to supply your own light probe SH coefficients – so if you want to actually paint the light on the probes with a brush, someone (from your team or on the asset store ;)) would write an editor script for it.

    The exception might be a case of a single probe in the scene that could act as a fancier ambient or image based lighting. We could do it so that you supply your IBL image and Unity performs the diffuse convolution and bakes it into the probe.

    3. Hmm, I don’t think it would work well in a general case. And in special cases you could just have the light which you want shadows from baked into the probes – then the probes in the shadow just wouldn’t contain that light.

  8. Robert Cupisz

    March 28, 2011 at 3:59 am / 

    @Sam Martin: I couldn’t dedicate any time to light probes since the blog post, but I hope it’ll happen soon. The way I see it:
    1. Light probes were always meant to be a super-low overhead feature, so only per-object interpolation was on the table.
    On the other hand I plan to open light probe editing, baking and fetching at runtime to scripting as much as possible together with hooks in the surface shaders, so it will be only a matter of a script and a shader to do per-pixel interpolation, if the game requires that and wouldn’t benefit from the built-in interpolation.
    2. Heh, that’s my biggest fear as well – especially with degenerate probe configurations. Well, we’ll see ;) There are a few libraries out there, but maybe writing our own tetrahedralization will turn out to fit our needs best, as usual.
    3. I’m all ears! From what I see though, every simplification comes at a price.

    Would be great to hear what you guys will come up with! :)

  9. laurent

    March 18, 2011 at 2:54 am / 

    @Robert: If I understand you, baking probes around a building or a tree (to get the green hue below the canopy) and attaching them to the prefab would allow me to instantiate the building or tree with its own lighting, and a receiver would transition from one to the other as long as the probe volumes of each prefab don’t intersect?
    If so, that’s good enough. If we can move probes within a sub-volume, that’s even better – it would allow doing without direct lighting entirely (think giant glow-worms in a forest :)

    2. I’m more of an artist and I switched my game to image based lighting because it allows greater artistic control, a non-photoreal painterly atmosphere (and super low drawcalls). Will you allow painting probes by hand?

    3. Finally can you allow change of probe intensity ? (minus = shadow)

  10. Sam Martin

    March 18, 2011 at 2:33 am / 

    Hi Robert,

    I’d be interested to know how you get on with the Delaunay tetrahedralisation. This is something we (Geomerics) considered but haven’t explored yet. The main things that put me off are:
    - you can’t do the lookup on the GPU. So it’s a per-object thing. More structured point sets (ie. grids) allow you to do per-pixel interp and use hardware acceleration.
    - robustly computing a DTet is not impossible, but it’s hard :) If you find a good library to do this please let me know!
    - there are simpler hackier ways to achieve the same kind of thing.

    Cheers,
    Sam

  11. Tessa BierVliet

    March 16, 2011 at 8:29 am / 

    Nice work with these new Beast features.

  12. Janord

    March 15, 2011 at 1:06 pm / 

    @Marielle Kroes

    Yes, that would be nice.
    But please stick to the topic.

    For additional requests go here : http://feedback.unity3d.com/

  13. Marielle Kroes

    March 15, 2011 at 1:04 pm / 

    It would also be nice if Unity supported material instancing.
    That would also give a performance boost.

  14. fweet

    March 14, 2011 at 10:56 pm / 

    Just put in the grid thing now, we need this feature ASAP. We can switch to a potentially better method later.

  15. Robert Cupisz

    March 14, 2011 at 6:56 am / 

    @Bianca: You can request a feature here http://feedback.unity3d.com and then both other users and Unity devs will prioritize it against other upcoming features :)

    For sanity, let’s keep discussions under a blog post limited to its topic, ok? :)

  16. Bianca

    March 14, 2011 at 6:21 am / 

    @David Bjorn, @David Mendack, @Jason Amstrad.

    No no, don’t get me wrong here. Unity is the reason why I switched over from UDK, since working with UDK is a real mess!
    I know that CryEngine and UDK are afraid of Unity Technologies because of the speed at which Unity3D is “evolving”.
    So I was just “wondering” when Unity will support fog volumes.
    I hope you guys understand me more clearly now.

  17. Jason Amstrad

    March 14, 2011 at 6:13 am / 

    @Bianca

    You can in fact achieve a similar effect with some programming in Unity3D.
    So in UDK you just drag and drop it in, but in Unity3D you have to do some programming to achieve a similar effect.

  18. David Mendack

    March 14, 2011 at 6:08 am / 

    @Bianca

    Are you comparing Unity with UDK again?
    Why don’t you just go ahead and download UDK then, if you want fog volumes so badly?
    Maybe Unity Technologies will in fact support fog volumes in the future, just wait and you’ll see.
    Be a bit patient, Unity still has to work on the new UI solution and pathfinding solution first.

  19. David Bjorn

    March 14, 2011 at 6:02 am / 

    @Bianca
    Who said that unity3d will “ever” support fog volumes ?

  20. Bianca

    March 14, 2011 at 6:00 am / 

    When will unity support fog volumes ?

  21. Robert Cupisz

    March 14, 2011 at 4:15 am / 

    @Jonathan Czeck: After main GI calculations are done, gathering light for individual probes is quite fast. As I mentioned in the post – it is possible to bake at a higher resolution and then cluster probes to create an adaptively subdivided grid, but that probably tips the trade-off balance away from control and performance towards automatic authoring (and higher bake times and memory consumption).

    @JoeW: Some light transitions will have to be harsh with the ‘two nearest probes’ method, unfortunately. Hopefully with other methods this might be better – but always at a cost ;) As I mention in the post, it’s hard to make automatic placement work well. And it’s even harder if the users should be able to move the probes afterwards, because what should we do when the regenerated probe positions change due to scene changes? Store user modifications and apply as deltas? Discard? Hmm…

    @Laurent: With each of those solutions probes can only be moved before they are baked. I’m thinking about one more possibility though, with which you could move sub-areas of probes (they wouldn’t update their light information of course, just move). You would place a bunch of OBBs within the scene, each with a uniform grid of probes. These could also be defined per larger prefab, like a building, and then instantiated or moved at runtime.

    @Thomas P.: Wouldn’t that be the worst of both worlds? ;) Both the need for manual placement and limitations as to where the probes are placed?

    @Georges Paz: Frostbite2 does use light probes, but using a different placement and interpolation technique. I’m not sure I should discuss it here, though. ;)

  22. Minevr

    March 13, 2011 at 10:52 pm / 

    White light…cool!

  23. Georges Paz

    March 13, 2011 at 8:57 pm / 

    Frostbite 2 is doing exactly the same thing (though I’m not sure how they handle the setup of light probes internally), but they aren’t doing what CryEngine 3 does to get indirect illumination – instead, they use light probes.
    Keep working hard guys, it’s looking awesome.

  24. David Bjorn

    March 13, 2011 at 11:54 am / 

    @Manon Seppen
    I have used it, but it is not quite as useful as Kismet.
    I think Unity has to come up with a better solution than PlayMaker.

  25. Manon Seppen

    March 12, 2011 at 6:09 am / 

    For those of you who have used the PlayMaker visual scripting plugin for Unity3D, is it a real timesaver?

  26. Nathali Abbortini

    March 12, 2011 at 6:04 am / 

    @Nathan

    Yes but what “bigger” feature is Robert Cupisz actually talking about ?
    And does it have to do with Beast or does it have to do with something completely different ?

  27. Nathan

    March 12, 2011 at 12:32 am / 

    Wow, very cool! Looking forward to when this is released along with the bigger feature you’re working on. Thanks for sharing! :)

  28. Jason Amstrad

    March 11, 2011 at 1:36 pm / 

    @Robert Cupisz

    You said that you are focusing on a bigger feature.
    What feature are you talking about then ?

  29. Thomas P.

    March 11, 2011 at 8:34 am / 

    Very awesome!! I was hoping to see a few more advanced features in the future and reading your post and watching the video got me really excited as I use beast very frequently. I hope we will be allowed to manually place the probes maybe based on a per 1 unit grid system or such.

  30. Laurent

    March 11, 2011 at 1:51 am / 

    Can the probes be moved?
    This way I could bake probes around buildings, parent them to the building’s prefab and, when I instantiate the building, I’d have the proximity effect of diffuse reflection.

    Can you add an intensity slider to each probe? This way, setting the slider to -1 would turn the probe into a diffuse shadow caster, or a fake Ambient Occlusion volume caster.

  31. JoeW

    March 10, 2011 at 5:11 pm / 

    This looks incredibly promising! Some of the light transitions look a tad harsh, but I also realize this is very “beta”. I’ve been waiting for SH since I’ve used them in other companies and loved the results I was able to achieve. I can’t contribute to the technical discussion – mostly just the artistic side of things. I think a combination of automatic and manual population would be the way to go – perhaps have the system automatically place probes where it thinks they belong, and allow the user to move/add/delete them, as well as *possibly* control the cross fade between them…. but it’s hard to say without using it how to improve it or what would work best.

  32. Jonathan Czeck

    March 10, 2011 at 5:06 pm / 

    How expensive is calculating n+1 probes compared to n probes? I would kind of think a lot of the calculations for the probe are already being performed for regular lightmapping. Maybe not, too. If the difference is negligible, maybe the problem can be divided into some brute force computation of a grid of probes (similar to setting up occlusion culling) and then intelligent reduction of the number of probes using some thresholds and some octree type structure. Processor time is much less expensive than an artist’s time, so we’re all about whatever brute force is required if it saves us some person-time. I don’t want to place light probes if an algorithm can do it for me.

    p.s. I did notice the temporal fading right away in the video and was distracted by it. But I’m picky.

  33. Dude

    March 10, 2011 at 4:32 pm / 

    Ah, I see. Thanks for clarifying!

  34. Robert Cupisz

    March 10, 2011 at 4:10 pm / 

    Even if we considered only direct light (and shadows!), the shapes would be complex enough to make this approach a rather poor fit. But we’re talking about indirect light here – finding the ‘volume’ of this light would be impractical, and impossible before simulating the light transport. Also, interesting light changes occur within the volume, not only on its surface as you suggest.

  35. Dude

    March 10, 2011 at 10:27 am / 

    Couldn’t you cast rays from each light into the scene, finding the volume each light affects? Then along the edges of these volumes the lighting would change in a way that cannot be interpolated.
    This would give you one volume for each light. Then place probes along those edges on either side, and group the probes that have similar lighting or are very close together.
    This of course would have to be redone each time the user makes changes in the scene. To reduce the number of probes, the user could define a threshold that would have to be crossed in order for two probes to be considered similar and hence grouped together.

  36. Robert Cupisz

    March 10, 2011 at 4:00 am / 

    @Dude: To handle positions outside of the hull I would still need to find the tetrahedron that’s the best match (dot products of its face normal compared to those of the neighbouring tetrahedra) and then project the position onto its face. In general, though, extrapolation won’t give a good guess at the lighting anyway, so pretty much anything we do there is as good or as bad ;)

    @Niosop: Sorry, we haven’t announced it yet and we try to keep the flexibility of our development by not making people too attached to stuff we haven’t implemented yet ;) As soon as the word is out though, we’ll be open about any internal details of it.

    @Dude: There’s no general way to know that. As I mention in the blog, we could bake a very high resolution grid and then cluster and remove similar probes. This however assumes that you know the density needed to capture ‘all’ interesting changes, which you don’t. Also – there’s actually no need for that ;) Even though changes in lighting are extreme in this test scene, in a game you probably couldn’t tell inaccuracies introduced by under-sampling if the probes are well placed.

    @Wes: Not yet, but I will. The idea is to make this as low overhead as possible (after all, it’s just a fancier ambient light 8) so that it runs on mobiles as well.

    @Fun: That stuff doesn’t really have anything to do with what I discussed here ;)

    @Wahooney: Nice :) I know Valve’s paper about ambient cubes. Somehow I prefer the higher accuracy of 3-band SH, which is still easy to interpolate. These probes are a bake-once affair, just like our current lightmaps solution. But we should probably think about whether we should at least implement blending between sets of lightmaps/probes for different lighting setups, hmm…

    @Haru: No way! This will be a free update to Unity!

    @Georges: Yeap, there’s a ton of games using SH light probes, including the awesome Killzone 2, Battlefield 3 and Milo and Kate. The trick is that we have to come up with something super easy to use, fast and small enough for mobiles and working for any conceivable game people make in Unity :]

  37. Georges Paz

    March 10, 2011 at 1:33 am / 

    Pretty awesome, those light probes!
    The Vision engine and UDK use a similar technique to light dynamic meshes with static (baked) lighting using multiple grids.
    Keep working on it! :)
    Cheers,

  38. Haru

    March 9, 2011 at 10:27 pm / 

    Ship It! looks amazing! ready to buy, put it on the asset store for 50$ and I’ll buy it right now! Nice job, looks very promising for many scenarios

  39. Wahooney

    March 9, 2011 at 9:54 pm / 

    I experimented with light probes recently as well. My approach is similar to the one you show, where the artist places probes in the scene. I also ran into the interpolation problem, also chose temporal damping first, and my next step was Delaunay tetrahedra… My only question is… how did you hijack my webcam and/or brain? ;)

    Seriously though, my method was different: I baked low-resolution (4×4) cube maps at each probe position, downsampled them to six pixels (one for each side) and interpolated between those pixels in the shader (similar to Valve’s paper on lighting in the Source engine). Each probe could also resample its environment over time or on position changes.

    It’s cool to see a feature like this being worked on officially though :) Will it handle truly dynamic lighting (day/night cycles, etc.) or is it a bake once affair?

  40. Fun

    March 9, 2011 at 12:51 pm / 

    Someone did something similar on YouTube:
    http://www.youtube.com/watch?v=-Pp9a6F2hzg

    I suspect it’s a similar technique; if not, I would like to see that realtime global illumination method implemented in the next version of Unity :D

  41. Mikael Vesavuori

    March 9, 2011 at 12:36 pm / 

    Excellent post. This is the kind of interesting detail I’d hope to see more of from modern businesses: discussing their upcoming features (and the process of solving/making them) and receiving community input. Excellent, again.

  42. God at play

    March 9, 2011 at 11:28 am / 

    Really excited to see this tech being worked on :)

    Can’t wait to get my hands on it!!

  43. Wes McDermott

    March 9, 2011 at 10:50 am / 

    Absolutely amazing! I’ve been hoping for this feature with the inclusion of Beast. Well done! I really look forward to your progress. Have you tested this technique on mobile platforms?

  44. Dude

    March 9, 2011 at 10:04 am / 

    Is there some way of finding out where “interesting” lighting changes happen? Then probes could automatically be placed along those lines.
    It appears in the video that most lighting changes are actually very extreme. If the probes are far apart (one in shadow, one in the sun), then a character moving from one probe to the next would only very gradually change its lighting (this can partly be seen in the video).
    So along these extreme changes there should be probes nearby on either side.

    Maybe this could be automated?

  45. Niosop

    March 9, 2011 at 9:58 am / 

    This is great. I’ve been hoping you guys would start to take advantage of some of the more advanced features of Beast. Are you allowed to say what the main project you are working on is?

  46. Dude

    March 9, 2011 at 9:56 am / 

    I’m currently attending a lecture course on computer graphics, which is why I found this article very interesting!

    I would also think that tetrahedralisation would be the best approach, mainly because of the use of barycentric coordinates. However, that may only be because I am most familiar with them.
    What would happen if a character was outside of all tetrahedra? Would you just project the position onto the nearest edge and interpolate on that edge?
