Global Illumination alternatives

SVO is old news. It was a research project that people jumped on at the time because it meant things were getting near to realtime raytracing, but it's not where we should be aiming. Look at this for example: http://www.youtube.com/watch?v=1pjupG1YPgE&list=UUEsdpGysd9FNz4rRSeLvowQ (I should also add he has a downloadable example of his superb work on one of his video threads). That was nearly six months back; it's just not right for GPU designs right now unless you have a nuclear sub powering your machine, and once the hardware gets there it's redundant anyway. When GPUs can efficiently do such things, path tracing with downsampling correction filters will be the best way of doing this (except maybe for hair, where cone tracing makes sense even in offline rendering).

One of the best realtime ideas I've seen is Morgan McGuire's realtime GI system, Lighting Deep G-Buffers. After speaking with him 4-5 months back, this should appear very soon, and his work on transparent depth solving is also superb (his power is in his beard; the only thing I dislike is the fact he works for Nvidia, which means if we let them it will just become another licensed attack on the overall game industry).

I'll start a thread to show the many great works aimed at solving this issue and let people decide what's the best direction forward, short term and long term.

It's a step in the right direction, and something I'm personally looking into for a later project along with other things, but all it really does is reduce memory usage a lot. The actual performance still isn't up there; in fact the numbers they get in the paper are just plain horrible.

Voxel cone tracing as a general solution may work one day, and may already be working, on the PS4 only, for really tight corridor environments, in Capcom's upcoming "Deep Down". But right now as a general solution it's not workable, at least not in a power envelope where you could run the entire rest of a game on a reasonable set of platforms.

The deep G-buffers thing is another completely useless idea; anything screenspace for "global" illumination, no matter how many hacks you throw at it, is not the way to go. Temporal stability is pretty much the entire point of "global" illumination to begin with.

Temporal stability, if you read the white paper, is exactly what this technique does well. It's a single-pass system (basic one bounce) that uses the past frame plus predictive evaluation based on per-pixel geometry velocity, so it can reuse that data to represent multi-bounce indirect lighting at no extra cost. Yeah, it's not going to be perfect because you don't have all the scene data for evaluation (unlike offline renderers, which have all the geometry), but it has enough through temporal remapping and viewpoint guard banding (e.g. rendering the view 10-20% larger than the screen), based on the fact that human perception can't correctly interpret indirect lighting and reflection coming from beyond our viewpoint each frame. By evaluating this extra perception data (which, even if it isn't a full 360-degree view, is enough 90% of the time for realistic perception of lighting and reflection) you get near 99% acceptance of the result.
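To make the reprojection idea concrete, here's a minimal sketch of the two pieces described above: a guard-banded render size and a per-pixel temporal blend driven by velocity. This is my own illustration, not the paper's code; the 15% band, the `samplePrev` callback and the hysteresis value are assumptions.

```cpp
// Minimal sketch of guard banding + velocity-based temporal reuse (assumed
// names and constants, not taken from the Deep G-Buffers paper).
#include <cmath>

struct Vec2 { float x, y; };
struct Vec3 { float r, g, b; };

// Guard-banded render target: pad the visible viewport so off-screen
// neighbours are still available when the camera pans.
inline int guardBanded(int visiblePixels, float band = 0.15f)
{
    return static_cast<int>(std::ceil(visiblePixels * (1.0f + band)));
}

// One pixel of the temporal pass: reproject last frame's indirect lighting
// using per-pixel velocity, then blend it with this frame's single-bounce
// result so bounced light accumulates over time at no extra tracing cost.
Vec3 temporalIndirect(Vec2 uv,
                      Vec2 velocity,             // screen-space motion, this frame -> last frame
                      const Vec3& currentBounce, // fresh one-bounce estimate
                      Vec3 (*samplePrev)(Vec2),  // fetch from last frame's indirect buffer
                      float hysteresis = 0.9f)
{
    Vec2 prevUV { uv.x - velocity.x, uv.y - velocity.y };
    Vec3 history = samplePrev(prevUV);
    return { hysteresis * history.r + (1.0f - hysteresis) * currentBounce.r,
             hysteresis * history.g + (1.0f - hysteresis) * currentBounce.g,
             hysteresis * history.b + (1.0f - hysteresis) * currentBounce.b };
}
```

The point is just that the "multi bounce for free" part comes from feeding last frame's already-lit result back in, and the guard band is what keeps that history valid near the screen edges.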

Also, you should look into what I posted for devs to look at not long ago. SVOGI, even with Intel's improvements, is still not great; I've been banging on about sparse voxel DAGs for months. Check this, bud: Compact Precomputed Voxelized Shadows. Great papers: http://www.cse.chalmers.se/~d00sint/

The issue I see here is that this is screen space. Imagine what will happen when you turn your back to the light source: the lighting will change drastically, because there isn't enough information on screen.

Though the technique does seem interesting for solving issues with translucency in deferred shading.

this is impressive!

https://www.youtube.com/watch?v=G9isGEI6Kfc

It does have a lot of good papers! I'm really impressed by "High Resolution Sparse Voxel DAGs"; the proof is in the pudding, as they say, and the images in that paper look very close to offline rendering. Epic's rendering engineers should probably look at it. It's a pity they don't have any videos to go along with the paper.

[QUOTE=gabrielefx;70363]
this is impressive!
[/QUOTE]

It is impressive, but it doesn't help Epic since there are no papers to read :stuck_out_tongue:

I’m sure they already did, because I posted those papers early in the beta.

There are just too few graphics engineers at Epic to handle all this stuff. And there are priorities. So if you guys are really good, apply to work at Epic. They can use your help :smiley:

That's actually what I'm toying around with (vacation aside): encoding lighting information in a layered reflective shadow map (recent paper) for individual lights, then encoding the standard geometry/color information into a set of 3D texture blocks, which, being coherent in memory, are a lot faster than sparse octrees, and reducing the memory footprint by storing the unlit voxel information in a directed acyclic graph. Combined with some more recent hardware features that let 3D textures use null pointers for, in this case, empty voxels, the resulting memory hit should be very low while the structure stays very fast.
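For anyone unfamiliar with the DAG part of that: the memory win comes from merging identical voxel subtrees so they are stored once and referenced many times. Here's a rough sketch of that deduplication step; the node layout, hashing and pool storage are illustrative assumptions of mine, not the layout from the papers.

```cpp
// Sketch of sparse-voxel-DAG deduplication: identical subtrees collapse to a
// single stored node. Illustrative layout only.
#include <array>
#include <cstdint>
#include <unordered_map>
#include <vector>

struct Node
{
    static constexpr uint32_t kInvalid = 0xFFFFFFFFu;   // marks an empty child
    std::array<uint32_t, 8> child{ kInvalid, kInvalid, kInvalid, kInvalid,
                                   kInvalid, kInvalid, kInvalid, kInvalid };
    bool operator==(const Node& o) const { return child == o.child; }
};

struct NodeHash
{
    size_t operator()(const Node& n) const
    {
        size_t h = 1469598103934665603ull;               // FNV-1a over child indices
        for (uint32_t c : n.child) { h ^= c; h *= 1099511628211ull; }
        return h;
    }
};

class VoxelDag
{
public:
    // Insert a node bottom-up; if an identical subtree already exists,
    // reuse its index instead of storing a copy.
    uint32_t addNode(const Node& n)
    {
        auto it = dedup_.find(n);
        if (it != dedup_.end()) return it->second;        // shared subtree, no new memory
        uint32_t index = static_cast<uint32_t>(pool_.size());
        pool_.push_back(n);
        dedup_.emplace(n, index);
        return index;
    }

    size_t uniqueNodes() const { return pool_.size(); }

private:
    std::vector<Node> pool_;                              // flat pool, easy to upload to the GPU
    std::unordered_map<Node, uint32_t, NodeHash> dedup_;
};
```

Since the DAG only holds the unlit occupancy/geometry data, the dedup rate stays high even in big scenes, which is where the footprint reduction comes from.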

It’s half an idea with pieces of broken code right now though. I still haven’t put out my Pixar ambient hack demo, which is another huge time saver (no need for double light bounce to still get information). And that’s mostly working. Maybe someone with a lot more time than me at Epic will read this and actually accomplish something though.

Is it just me, or is there a little whiff of ■■■■■■■■ about Frenetic Pony's statements? You clearly don't know your *** from your elbow; don't tell porkies, Pony boy!

Funny story, the source code for Deep G-Buffers has just been released!

http://graphics.cs.williams.edu/papers/DeepGBuffer14/

Wonder if anyone would like to take a crack at implementing this.

Is it the consensus that other techniques (such as path tracing) will outperform voxel cone tracing once hardware has advanced far enough to handle SVOs efficiently? If so, what is the key problem? Is it that voxel data structures are not well suited to GPUs?

Yeah, I've had a quick chat with Morgan about the work and ideas for speed and quality improvements. Morgan's a top boy, he always shares the good ****! If Epic staff don't port this OpenGL code over to UE4, I'll take a look. Check the video though (which is just a tech demo and doesn't have PBR or any of the other bells and whistles UE4 provides); mixed with UE4 this could be a very nice screen-space GI system that could well reach near-photoreal results with the right work.

Sparse voxel octrees aren't good for GPUs simply because they don't map linearly to memory, causing all the nice parallelisation that GPUs are good at to go out the window. Which is why many people looking at voxel cone tracing have switched to uniform 3D textures, which is a very nice speedup but can take up a lot more RAM if you aren't very, very clever.
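A toy comparison of the two access patterns, since that's really the whole argument (the data layouts here are made up for illustration; the point is the dependent reads):

```cpp
// Octree lookup vs. dense 3D grid lookup, CPU-side toy version.
#include <cstdint>
#include <vector>

struct OctreeNode
{
    uint32_t child[8];   // 0 means "no child" in this toy layout (0 is reserved for the root)
    float    radiance;
};

// Sparse octree: one dependent read per level, so neighbouring GPU threads
// scatter across memory and can't coalesce their fetches.
float sampleOctree(const std::vector<OctreeNode>& nodes,
                   uint32_t x, uint32_t y, uint32_t z, int levels)
{
    uint32_t index = 0;                                   // root
    for (int level = levels - 1; level >= 0; --level)
    {
        uint32_t octant = ((x >> level) & 1u)
                        | (((y >> level) & 1u) << 1)
                        | (((z >> level) & 1u) << 2);
        uint32_t next = nodes[index].child[octant];
        if (next == 0) break;                             // empty space, stop descending
        index = next;                                     // pointer chase
    }
    return nodes[index].radiance;
}

// Dense 3D texture / grid: one address computation, one coalesced fetch,
// but memory grows with resolution^3 unless you tile or clip it.
float sampleDense(const std::vector<float>& voxels, uint32_t dim,
                  uint32_t x, uint32_t y, uint32_t z)
{
    return voxels[(static_cast<size_t>(z) * dim + y) * dim + x];
}
```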

Bleh, just another "look at what we can do in Sponza!" type demo. Something highly dependent on view direction and scene complexity isn't a good solution.

Screenspace tracing stuff was invented because you already had to do all the heavy lifting of rendering everything there to begin with, and re-using it relatively cheaply made sense. This is just a wrong-headed approach that takes the "cheap" portion of 2.5D screenspace tracing and extends it by adding yet more of the brute-force work of rendering everything to screenspace in the first place.

Besides, notice that A) it's recorded at just 30 fps on a Titan, which is really a nice bottom performance requirement to have, and B) during the global illumination demo they never do something an actual player would do, which is just turn around and face away from the light, because then all the lighting would disappear! Really, I wouldn't bother with this.

I'm using only a single GTX 480, and it runs near 30 fps on it. I don't see why it wouldn't be faster on, say, a GTX 780.

Yeah, Pony Boy is full of *****. The video was recorded at 30 fps; it runs at better than 30 fps. Just because it's screen space doesn't mean lighting isn't calculated. It's the same as lighting with no GI: if you have lights within the view frustum that affect what you see, the GI is still effective. Also consider this isn't the one-pass version in the paper; there are many ways to speed this up.

Better than the lies about methods that Pony Boy goes on about, what a ****! :slight_smile:

PS: the atrium scene with tables and chairs is a very high-poly scene, it's not a hack, and nothing is baked. You keep going on about the code you're working on; show us your engine's examples. If you want a look at mine, just search Kingbadger3d on YouTube and look for NexusGL.

You should try implementing it, it’d be the most epic project ever! :smiley:

Oh fine, go ahead and implement it, or at least understand what the paper is about. You don't have to listen to me, of course; I'm just trying to save someone time and effort. Or you could just download the demo, if you're incapable of visualising what this technique actually does, and see for yourself.

I've been told by the moderators to ignore you. Again, you keep going on about the code you're working on; any videos? In fact, anything worth a squirt of ****? Show us you're right. If not, just get to the back of the bus.

The Deep G-Buffer GI seems like it could be a nice addition; it retains some properties of SSDO, produces very good AO, and it's relatively cheap.

It's still not a real solution by itself, I'm afraid. It's screen space, and even with all the tricks they used it doesn't take into account the actual volume of light scattered into the whole scene; light behind the camera doesn't scatter anything. Which is fine, BTW, as a real practical solution could be to use coarse volume sampling to get interactions from the areas outside the camera frustum and blend them with DGBGI when they come into view.
Sort of like the reflection system we have right now, where samples from the reflection probes get blended with SSR.
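In spirit, that blend would look something like the sketch below, analogous to how SSR falls back to reflection probes. The confidence term and the two sampling inputs are my own assumptions for illustration, not anything from the paper or the engine.

```cpp
// Back-of-the-envelope blend of screen-space GI with a coarse world-space
// volume/probe fallback (assumed inputs, illustrative only).
struct Vec3 { float r, g, b; };

inline Vec3 lerp(const Vec3& a, const Vec3& b, float t)
{
    return { a.r + (b.r - a.r) * t,
             a.g + (b.g - a.g) * t,
             a.b + (b.b - a.b) * t };
}

// screenGI   : result of the screen-space (deep G-buffer) pass for this pixel
// volumeGI   : coarse irradiance fetched from a world-space volume / probe set
// confidence : 1 when the screen-space trace stayed on-screen and hit valid
//              geometry, falling toward 0 near the screen edges or on misses
Vec3 blendIndirect(const Vec3& screenGI, const Vec3& volumeGI, float confidence)
{
    // Lights behind the camera only exist in the volume term, so the blend
    // keeps them from popping when they enter or leave the view frustum.
    return lerp(volumeGI, screenGI, confidence);
}
```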

Still, I'd love to see strides toward implementing something more elegant, i.e. something where I don't have to carefully massage the placement of proxy probes to integrate a screen-space effect. Think about procedural levels, for instance: you will not be able to art-direct whatever volumes you would need to get lighting info. If you have some corridors with, say, a fire around the corner, you're not going to see it. Then it suddenly pops in and scatters; it could be a nice effect, but still not really realistic. They somehow address this using 2 or 3 levels of G-buffers, but the placement is very tight and doesn't allow much. And it still doesn't take into account the fire you already passed that's now at your back.

Please calm down. After all, this is an Epic feedback forum; if you want to fight, take it to another thread.
Pony's last point was an intelligent one that applies to every company in the world: investment of cost/time vs. return. In his eyes this tech is not really worth investing time in, and the comments he made are pretty legit.
It's good tech, mind you, so your comments are pretty legit too. It only depends on how much time and how many resources you have to invest.
If he's developing something, you should be happy instead of bashing him, since that's going to be distributed at some point and everyone could benefit from it. If he doesn't deliver, well, who cares? You already have the tools to make a beautiful game, and those come from Epic, not him.

That being said, is there anyone willing to port the released source code of DGBGI to UE?
I'm just an artist with enough technical knowledge to follow along; the best I could do is make you a statue in 3D :wink:

I have downloaded the code from the link and have had a quick look at it. I plan to convert it across at some point, I'm just not sure when yet, and if I'm accepted into the NVidia HairWorks beta program it may take even longer. So if no one has come up with anything better by that point, then I will do the conversion.