An idea for a new way of calculating shadows

So a few years ago I had this idea for calculating shadows in a scene, which I wasn’t sure had been tried before.
I shelved the idea because I was too busy with other stuff. Now that I’m getting into the Unreal engine I’m kind of keen on implementing it there.
I’m not a rendering wiz, but I do have a copy of “Computer Graphics: Principles and Practice” and I wrote a raytracer as a kid. I wanted to pose this question to the larger community of rendering engine geniuses: has this been tried before, and does it sound like a decent idea?

Here’s my idea. You take a scene, like this structure of a well.

Then you run a program on the scene that samples the geometry at evenly distributed positions.
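To make that concrete, here’s a rough C++ sketch of what I mean by evenly distributed samples (all the names are mine, purely for illustration). One way is to pick triangles in proportion to their area and warp two random numbers onto each picked one; another would be to place one sample per texel in UV space, which would line up nicely with the map idea further down.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// One precomputed sample: a position on the surface plus the normal there.
struct SurfaceSample { Vec3 position; Vec3 normal; };

// Warp two uniform random numbers in [0,1) onto a triangle using the
// standard sqrt-based barycentric mapping, which distributes points
// uniformly over the triangle's area. Picking *which* triangle to
// sample can be area-weighted (e.g. std::discrete_distribution over
// triangle areas) so samples end up even across the whole mesh.
SurfaceSample SampleTriangle(const Vec3& a, const Vec3& b, const Vec3& c,
                             const Vec3& normal, float u1, float u2) {
    float s = std::sqrt(u1);
    float w0 = 1.0f - s, w1 = s * (1.0f - u2), w2 = s * u2;
    Vec3 p{ w0 * a.x + w1 * b.x + w2 * c.x,
            w0 * a.y + w1 * b.y + w2 * c.y,
            w0 * a.z + w1 * b.z + w2 * c.z };
    return { p, normal };
}
```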

At each sample, do hidden surface removal on all the faces that lie within the hemisphere oriented along the normal at the sample’s surface point.

Here comes the tricky part. We then construct a polygon that represents the silhouette as seen from the sample position - that is, the line that delineates the geometry of the scene from the “sky”, if you will.
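Whatever does the hidden surface removal, the useful output per sample is a set of directions expressed in the sample’s local frame. A minimal sketch of the conversion I have in mind, from a world-space point to the two angles a silhouette vertex would store (assuming an orthonormal tangent/bitangent/normal frame at the sample; names are made up):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 Sub(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float Dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Express a world-space point as two angles on the hemisphere around a
// sample: azimuth in [-pi, pi] around the normal, elevation in
// [-pi/2, pi/2] up from the surface plane. tangent/bitangent/normal is
// an orthonormal frame at the sample position.
void ToPolar(const Vec3& point, const Vec3& origin,
             const Vec3& tangent, const Vec3& bitangent, const Vec3& normal,
             float& azimuth, float& elevation) {
    Vec3 d = Sub(point, origin);
    float len = std::sqrt(Dot(d, d));
    if (len <= 0.0f) { azimuth = elevation = 0.0f; return; }
    float t = Dot(d, tangent) / len;
    float b = Dot(d, bitangent) / len;
    float n = Dot(d, normal) / len;
    azimuth = std::atan2(b, t);
    elevation = std::asin(n);  // height above the surface plane
}
```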

Here’s an example of a sample taken at the bottom of the well. It would only see the faces made up of the vertices 1, 2, 3, 4 and the corresponding vertices at the bottom of the well. The silhouette would then be made up of the edges between the vertices that border the “sky”.

Here’s another example. The sample at “x” sees the corners of the cube and constructs a silhouette from that.

The program that crunches the scene will then save these samples in what I call a silhouette map: a file where each sample gets a uv-coordinate so we can easily make a surface lookup into the silhouette map later. Each sample’s silhouette is made up of vertices in polar coordinates - that is, two angular components, an azimuth around the normal and an elevation above the surface plane.
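In code, the map could be as simple as this (again just a sketch with made-up names):

```cpp
#include <vector>

// One vertex of a silhouette: a point where the scene geometry meets
// the "sky", stored as the two angles described above.
struct SilhouetteVertex {
    float azimuth;    // angle around the surface normal
    float elevation;  // angle up from the surface plane
};

// One precomputed sample. The uv is where the sample lives in the
// silhouette map, so a surface point can find its neighbours quickly.
struct SilhouetteSample {
    float u, v;
    std::vector<SilhouetteVertex> silhouette;  // sorted by azimuth
};

struct SilhouetteMap {
    int width, height;                      // samples per uv axis
    std::vector<SilhouetteSample> samples;  // width * height entries
};
```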

We can then, at render time, for each spot on the surface where we want to determine whether it lies in shadow, make a lookup into the silhouette map. If we’re between samples, we create a new silhouette by averaging the nearby silhouettes. We take the polar coordinates and project them onto an infinite hemisphere. It’s fairly cheap to determine whether the incident light angle lies within the silhouette or not: if it falls above the horizon of the silhouette, the point is lit; otherwise it’s in shadow.
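Here’s roughly how I picture the lit-or-not test, assuming each silhouette is stored with its vertices sorted by azimuth. One detail worth noting: instead of literally averaging whole silhouette polygons, which gets fiddly when vertex counts differ, it might be cheaper to evaluate the horizon elevation at the light’s azimuth for each nearby sample and bilinearly blend those scalar heights.

```cpp
#include <cmath>
#include <vector>

struct SilhouetteVertex { float azimuth; float elevation; };

// Interpolate the horizon elevation at a given azimuth from a
// silhouette whose vertices are sorted by azimuth in [-pi, pi].
float HorizonAt(const std::vector<SilhouetteVertex>& sil, float azimuth) {
    const float kTwoPi = 6.28318530718f;
    size_t n = sil.size();
    if (n < 2) return n ? sil[0].elevation : 0.0f;
    for (size_t i = 0; i < n; ++i) {
        const SilhouetteVertex& a = sil[i];
        const SilhouetteVertex& b = sil[(i + 1) % n];
        float span = b.azimuth - a.azimuth;
        if (span < 0.0f) span += kTwoPi;  // the segment that wraps around
        float off = azimuth - a.azimuth;
        if (off < 0.0f) off += kTwoPi;
        if (off <= span) {                // azimuth falls in this segment
            float t = (span > 0.0f) ? off / span : 0.0f;
            return a.elevation + t * (b.elevation - a.elevation);
        }
    }
    return 0.0f;  // unreachable for a well-formed silhouette
}

// The point is lit if the light direction (already converted into the
// sample's local angles) clears the silhouette's horizon.
bool IsLit(const std::vector<SilhouetteVertex>& sil,
           float lightAzimuth, float lightElevation) {
    return lightElevation > HorizonAt(sil, lightAzimuth);
}
```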

We don’t need to rerun the process that calculates silhouettes as we move the light around the scene. I also don’t think we need to recalculate the silhouette map when we apply rigid transforms to the scene - translation, rotation and uniform scale (a non-uniform scale or shear would distort the angles, so those would need a rebuild). But of course, if things move around within the scene, we can’t use this.
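Since everything is stored relative to each sample’s local frame, a rigid transform just means rotating the light into the object’s local space before the lookup, rather than rebuilding the map. A minimal sketch, assuming the object’s rotation is a plain row-major 3×3 matrix (in Unreal I’d expect something like FTransform::InverseTransformVectorNoScale to do the same job):

```cpp
struct Vec3 { float x, y, z; };
struct Mat3 { float m[3][3]; };  // local-to-world rotation, row-major

// For a pure rotation the inverse is the transpose, so rotating the
// world-space light direction into the object's local frame is just a
// multiply by the transposed matrix. The silhouette map itself stays
// untouched.
Vec3 WorldToLocalDir(const Mat3& rot, const Vec3& d) {
    return { rot.m[0][0] * d.x + rot.m[1][0] * d.y + rot.m[2][0] * d.z,
             rot.m[0][1] * d.x + rot.m[1][1] * d.y + rot.m[2][1] * d.z,
             rot.m[0][2] * d.x + rot.m[1][2] * d.y + rot.m[2][2] * d.z };
}
```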

One of the things I think this could be useful for is something like terrain, which has a gazillion polygons. For a spot in a valley of the terrain, the silhouette is just the polygons along the rim of the mountains.

In fact, since it’s just polygons, there are tons of great algorithms for simplifying them. We could easily reduce the number of vertices along a ridge of the terrain based on its distance to the sample, like this.
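For instance, a dead simple greedy decimation pass could drop any silhouette vertex that sits within some angular tolerance of the line between its kept neighbours (sketch only; it treats the silhouette as an open strip and ignores the azimuth wrap at ±π for brevity):

```cpp
#include <cmath>
#include <vector>

struct SilhouetteVertex { float azimuth; float elevation; };

// Greedy decimation: walk the silhouette and drop any vertex whose
// elevation is within 'tolerance' radians of the straight line between
// the last kept vertex and the next one - it contributes almost
// nothing to the horizon.
std::vector<SilhouetteVertex>
Simplify(const std::vector<SilhouetteVertex>& sil, float tolerance) {
    if (sil.size() < 3) return sil;
    std::vector<SilhouetteVertex> out;
    out.push_back(sil.front());  // always keep the endpoints
    for (size_t i = 1; i + 1 < sil.size(); ++i) {
        const SilhouetteVertex& prev = out.back();
        const SilhouetteVertex& cur = sil[i];
        const SilhouetteVertex& next = sil[i + 1];
        float span = next.azimuth - prev.azimuth;
        float t = (span != 0.0f) ? (cur.azimuth - prev.azimuth) / span : 0.5f;
        float predicted = prev.elevation + t * (next.elevation - prev.elevation);
        if (std::fabs(cur.elevation - predicted) > tolerance)
            out.push_back(cur);
    }
    out.push_back(sil.back());
    return out;
}
```

A nice side effect of working in angles is that distant ridges subtend small angles to begin with, so a fixed angular tolerance already thins out faraway detail in roughly the distance-dependent way I’m after.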

Another use I see is generating soft shadows. This might be more of a non-realtime renderer thing, but here you would supersample the silhouette map, then jitter the samples and average them. For parts of the surface in the penumbra you’d get some samples falling in shadow and some not.
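A hedged sketch of that, reusing HorizonAt from the earlier snippet: jitter the incident light direction over the light’s angular radius and average the horizon tests. Note this jitters the light rather than the map samples as I described above, but I think it amounts to much the same thing:

```cpp
#include <random>
#include <vector>

struct SilhouetteVertex { float azimuth; float elevation; };

// Defined in the earlier lookup sketch.
float HorizonAt(const std::vector<SilhouetteVertex>& sil, float azimuth);

// Estimate how lit a point is by jittering the light direction over
// the angular radius of an area light and averaging the horizon tests:
// 0 means full shadow, 1 fully lit, anything between is penumbra.
float SoftShadow(const std::vector<SilhouetteVertex>& sil,
                 float lightAzimuth, float lightElevation,
                 float lightAngularRadius, int numSamples, std::mt19937& rng) {
    std::uniform_real_distribution<float> jitter(-lightAngularRadius,
                                                 lightAngularRadius);
    int lit = 0;
    for (int i = 0; i < numSamples; ++i) {
        float az = lightAzimuth + jitter(rng);
        float el = lightElevation + jitter(rng);
        if (el > HorizonAt(sil, az)) ++lit;
    }
    return float(lit) / float(numSamples);
}
```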


In closing: I had trouble finding a good venue where I could discuss this idea of mine. Maybe this isn’t the right place for it, but I wanted to get some input from you rendering wizards on whether this is a thing people actually use, or if it sounds like a terrible idea.