Hacker News

When I was ray tracing 15+ years ago, it occurred to me that lights belong in an acceleration structure, similar to geometry. It made querying the lights relevant to a given (x, y, z) point in the scene fast and easy. This article covers a bit more than just that, but the concept is similar.


That's tricky, though -- if you just query the acceleration structure for the nearest lights, you might discard some that are important but far away, and that introduces bias into the results.

The unbiased thing would be to consider all the lights, but use some kind of weighted random sampling where you pick nearby lights most of the time and occasionally pick a far-away one, giving its contribution a higher weight to counteract how rarely it's chosen.
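A minimal sketch of that idea: pick one light with probability proportional to a cheap importance estimate (intensity over squared distance), and weight its contribution by the inverse of that probability so the estimator stays unbiased. The `(position, intensity)` tuple format is made up for illustration.

```python
import random

def sample_light(lights, point):
    """Pick one light with probability proportional to a rough importance
    estimate (intensity / squared distance), and return it along with the
    inverse-probability weight 1/p that keeps the estimator unbiased.
    `lights` is a list of ((x, y, z), intensity) tuples -- a made-up
    format for this sketch."""
    weights = []
    for pos, intensity in lights:
        d2 = sum((p - q) ** 2 for p, q in zip(point, pos))
        weights.append(intensity / max(d2, 1e-6))
    total = sum(weights)
    # Draw one light; the probability of light i is weights[i] / total.
    r = random.random() * total
    for light, w in zip(lights, weights):
        r -= w
        if r <= 0:
            return light, total / w
    return lights[-1], total / weights[-1]
```

Averaging `contribution * weight` over many samples converges to the sum over all lights, so nothing far away is ever fully discarded; it's just sampled rarely.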

In a lot of contexts, though, I can agree that the pragmatic choice might be just to cull the number of lights you have to deal with by defining them to have a maximum range, and if you store them in an acceleration structure it'll be easy to query just the ones that matter.


I think "pragmatic choice" is the key phrase here. When working on a general renderer for a general game engine, lighting is a much tougher problem than in a specialized renderer for a specialized game engine, where an acceleration structure over the lights easily solves most of the problem IME.


I guess the approach you're looking for to make sure that relevant light sources aren't ignored is covered by the ideas in the clustered deferred shading or clustered forward shading algorithms: you divide your view frustum into a 3d grid and for each light source you rasterize its sphere of influence (up to an arbitrary cutoff) into that grid. For each grid cell, you keep a list of relevant lights. For each shading point, you look up its grid cell and only shade based on the lights that got rasterized into it. Naturally, you have to decide on an arbitrary cutoff radius for each light source which introduces bias.
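A toy version of that binning step, using a world-space grid instead of view-frustum clusters to keep it short (the grid bounds, cell counts, and light format are all invented for the sketch):

```python
def build_light_grid(lights, grid_min, grid_max, dims):
    """Bin each light into every cell its cutoff sphere could overlap.
    `lights` is a list of ((x, y, z) center, cutoff radius) pairs.
    A real clustered-shading implementation rasterizes spheres into
    view-frustum clusters; this uses a world-space box for simplicity."""
    cell = [(hi - lo) / n for lo, hi, n in zip(grid_min, grid_max, dims)]
    grid = {}
    for li, (center, radius) in enumerate(lights):
        # Conservative AABB of the sphere, clamped to the grid.
        lo = [max(0, int((c - radius - m) // s))
              for c, m, s in zip(center, grid_min, cell)]
        hi = [min(n - 1, int((c + radius - m) // s))
              for c, m, s, n in zip(center, grid_min, cell, dims)]
        for x in range(lo[0], hi[0] + 1):
            for y in range(lo[1], hi[1] + 1):
                for z in range(lo[2], hi[2] + 1):
                    grid.setdefault((x, y, z), []).append(li)
    return grid

def lights_for_point(grid, grid_min, grid_max, dims, p):
    """Look up the list of light indices binned into p's cell."""
    cell = [(hi - lo) / n for lo, hi, n in zip(grid_min, grid_max, dims)]
    key = tuple(min(n - 1, max(0, int((c - m) // s)))
                for c, m, s, n in zip(p, grid_min, cell, dims))
    return grid.get(key, [])
```

Shading then only touches the per-cell list, which is exactly where the arbitrary cutoff radius (and its bias) enters: a light outside every cell it could influence simply never shows up in a lookup.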


>> That's tricky, though -- if you just query the acceleration structure for the nearest lights, you might discard some that are important but far away, and that introduces bias into the results.

Agreed. A wide area light can be handled, but 1000 far-away lights my system would ignore, even though they might be better modeled as ambient light. But hey, I was ray tracing a maze with hundreds of lights at 10-20 fps on a CPU ;-)


I wonder if it would be possible to store the lights in a 4D spatial data structure, with the fourth dimension being intensity, and then find the k nearest neighbors as defined by something akin to the Minkowski metric?
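One possible reading of that, sketched brute-force (a real renderer would use a k-d tree): embed each light at (x, y, z, scale * (max_intensity - intensity)), so that at equal distance a brighter light gets a smaller fourth component and ranks nearer, then take the k nearest under a Minkowski p-norm. The embedding and the `scale` knob are guesses at what the comment intends, not an established method.

```python
import heapq

def k_nearest_lights(lights, point, k, p=2.0, scale=1.0):
    """`lights`: list of ((x, y, z), intensity) tuples (made-up format).
    Embed each light in 4D as (x, y, z, scale * (max_intensity - intensity))
    so brighter lights sit closer to the query, then return the k nearest
    under the Minkowski p-norm. Brute force for clarity."""
    i_max = max(i for _, i in lights)
    def dist(light):
        (x, y, z), i = light
        deltas = [x - point[0], y - point[1], z - point[2],
                  scale * (i_max - i)]
        return sum(abs(d) ** p for d in deltas) ** (1.0 / p)
    return heapq.nsmallest(k, lights, key=dist)
```

With a suitable `scale`, a very bright light a few units away can outrank a dim light right next to the shading point, which is roughly the behavior you'd want from intensity-as-a-dimension.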



