Realtime Dynamic GI + Reflections + AO + Emissive - AHR

Oh neat! I was fiddling around with something like that with signed distance fields, essentially trying to store directionality for distance to nearest, but it increased memory size by a lot :frowning: Then again maybe I’m just doing it in a dumb way.

What’s the performance increase you’ve seen with your thing?

For the Visual Studio thing, if you cloned the repo correctly from the page (following Epic's instructions), you should have everything set up so that it works automatically.
You need to create two post process volumes. On one, check DefinesAHRBounds to set it as the bounds volume; that represents the extents of your scene. On the other, just mess with the settings in the Approximate Hybrid Raytracing tab.
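If you're wondering what the bounds volume actually does, here's a conceptual sketch (plain C++, my own names, not the actual AHR code): the flagged volume's box is the region the voxel grid has to cover, and together with the voxel size that fixes the grid resolution.

```cpp
#include <cmath>

// Conceptual sketch only, not the AHR source. The volume with the
// DefinesAHRBounds checkbox supplies the world-space box the voxel grid
// covers; the voxel size then fixes the grid resolution.
struct Bounds { float Min[3]; float Max[3]; };

void VoxelGridDims(const Bounds& SceneBounds, float VoxelSize, int OutDims[3])
{
    for (int Axis = 0; Axis < 3; ++Axis)
        OutDims[Axis] = int(std::ceil(
            (SceneBounds.Max[Axis] - SceneBounds.Min[Axis]) / VoxelSize));
}
```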
You could check the video I have on the first post, but it’s quite outdated. Still, it should help to get the basics down. I really need to do a new one, but with studies and all it’s being pushed back.

That's what's neat: the memory increase is small, because the mip level map (really, it sounds wrong; if someone can suggest a catchier name, I'm all ears) is just 8 bits per component (I could even cut it to 4, but I think the overhead on tracing wouldn't be worth it), and the scene volume's mips are binary, so only one bit per voxel (currently it uses a whole byte, but that's because it was the fastest way to implement it; I'll change it in the next few days).
Performance is much more consistent now, because just a few samples are enough, and it scales better as the voxel size decreases. Overall I get about 2 to 3 times faster tracing, depending on settings. I really need to come up with a standardized benchmark that I can distribute and use to test improvements and performance on different computers.
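For anyone curious what the one-bit-per-voxel change looks like, here's a minimal sketch (my names here, not the actual AHR code) of packing eight voxels into each byte instead of spending a whole byte per voxel:

```cpp
#include <cstdint>
#include <vector>

// One bit per voxel instead of one byte: pack 8 voxels into each byte of
// storage. Same information, an eighth of the memory, at the cost of a
// shift and a mask per lookup.
struct BitVoxelVolume
{
    std::vector<uint8_t> Data; // ceil(VoxelCount / 8) bytes

    void SetSolid(size_t Index)
    {
        Data[Index >> 3] |= uint8_t(1u << (Index & 7));
    }

    bool IsSolid(size_t Index) const
    {
        return ((Data[Index >> 3] >> (Index & 7)) & 1u) != 0;
    }
};
```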

Also, one quick note, but an important one. Because of an old bug regarding render targets, which seems to have become even worse with 4.11, don't select a camera in the editor. It will run for a few seconds and then the whole editor will crash because it runs out of memory allocating render targets.
Yes, I know I have to fix it, but for now I'm just warning people.

I will take a look at Epic's instructions about it. I was doing it by downloading the zip and then going through the process again, but it's not a problem.
Thank you, I have now found your “AHR & UE4 - Alpha” video; I hadn't seen it, haha. I will start studying Unreal Engine with AHR.
I understand that you have studies and a life, there's no problem, I can wait patiently for more progress.
I work as an architect and I don't have a lot of time either, but I'm still following this because my intention is to end up using it to render videos.
I am currently using Lumion, but I understand that I can get videos with better interior lighting with this application, and without having to unwrap UVs for light baking.

Hi everyone, things have been quiet around here. I've been quite short on time lately, so that's why.
Anyway, I just pushed a few changes to the blur code that make it a tad faster.
My test scene now sits at a consistent 60 FPS at 1440p (it averages a bit more) when packaged. And by turning the settings down, I managed to run it on an i3 4000M's iGPU (an HD 4600) at about 20 FPS, so not bad there (even without AHR, UE4 runs quite slow on the Intel graphics I tried, not really sure why).
Anyway, just a quick notice.

So, this happened today :smiley:

I'm making a scene to showcase AHR, along with tips and tricks (I'm trying to build what is the best possible case for AHR).
Not much done yet though; I started today and spent most of the day on 3D scans (killing two birds with one stone, actually, as I've been meaning to try photogrammetry for a while now. The white stones are scanned ones).
Here's the above gif at a better res (it says ~80 FPS, but I get like 90 when playing in editor. Still, there's not much in the scene yet) for everyone to see. I was just playing with the sun, but thought it looked cool :smiley: Only the sun there; the blue light in the back is actually an emissive cylinder.

Hey, do you have multiple bounces yet? Because that would be great! Also, are there caustics (reflection, refraction) at all? If not, is it possible for you to implement them? Lastly, I have an idea for path tracing in UE4. It might be helpful… oh yeah, I made a PBR path tracer.

Good news Santiago, looks great :slight_smile:

No multiple bounces yet, but it's on the plan. About caustics, no, and they could be implemented but it would take a few changes (by caustics I assume you mean Caustic (optics) - Wikipedia , because opaque surface reflection is supported, and refraction, along with transparent surfaces, could be implemented, but with the same changes as caustics). About your ideas, great! Send them by PM.

Thanks! Will try to update frequently

Hey, great progress! Always keeping an eye on your solution, as our game would immensely benefit from GI (sci-fi interior with completely dynamic lighting, high focus on turning power/lights on/off, closing and opening shutters to let light in, etc). Currently we do this manually by placing point lights for GI and linking those to the shutter trigger so that they dim down when we close the shutters, like this: Imgur: The magic of the Internet . Pretty labor-intensive setups. And 1 bounce would be more than we need, actually. But I'm curious, are PBR materials working properly in your setup? You probably don't have automatically updating reflection captures, so I guess you have to rely on SSR for reflections? (Which often kills the reflections entirely, rendering PBR materials moot.) It would be great to have some documentation about what's working and what the current restrictions are.

looking forward to seeing more!

How do I pm you?

Oh man, this is great!!! Can't wait to see it in action! EPIC, please please please, Unreal 4 should have real-time lighting like this on board :slight_smile:

AHR lets you have dynamic reflections without SSR or updating reflection captures. :slight_smile:

With a few caveats.
-roughness is not taken into account for tracing (yet; it IS taken into account at the composite stage, so that rough surfaces don't reflect and smooth surfaces do), so all reflections are sharp (see the sketch at the end of this post). Normal maps are taken into account, and they are necessary to break up reflections and achieve good results, as the next point explains.
-reflections are on the voxels, and they aren't filtered, so you get blocky reflections (think Minecraft). That's why having a good deal of detail in the normals helps break up the reflections so that the blocks aren't noticeable.
-only the lit parts are visible in the reflections; the rest are black (there's a slider to control the amount of ambient color in the voxels, but it's a hack, so it needs to be used with care).
-there's no slider to control the intensity of reflections (but it's easy to add, so I'll implement it soon).
If you can live with all that, then yeah, you get nice reflections. In practice it's not as bad as it sounds; I'll post some screens later (there are some earlier in the thread).
I will implement proper roughness later, probably with the same method DICE published (they use it for SSR, but it's the same for these reflections). Still, the blocks can't be “fixed”, so it's not really fit for mirrors and the like.
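To make that first caveat concrete, here's roughly the composite-stage idea in illustrative C++ (the real thing is a shader, and the exact curve may differ): the traced reflection stays mirror-sharp, and roughness just fades it out.

```cpp
// Illustrative sketch only; the actual work happens in AHR's composite shader.
struct Color { float R, G, B; };

Color CompositeReflection(Color SceneColor, Color TracedReflection, float Roughness)
{
    // Tracing ignored roughness, so TracedReflection is always sharp.
    // Instead of blurring it, fade it out as the surface gets rougher.
    float Weight = 1.0f - Roughness; // 1 = perfect mirror, 0 = fully rough
    return { SceneColor.R + TracedReflection.R * Weight,
             SceneColor.G + TracedReflection.G * Weight,
             SceneColor.B + TracedReflection.B * Weight };
}
```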

Just go to my profile and send it from there

Any ETA for a release on the Marketplace?

Another thing I came across: constraining ray sample directions to keep coherency without (or rather, with a lot fewer) unwanted artifacts:

Paper: Siggraph 2016: Cache-Friendly Micro-Jittered Sampling - YouTube

Keeping coherency within the warp/wavefront for a single bounce could allow a lot more rays to be traced without losing much performance.
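The core trick, as I understand it (my simplified sketch, not the paper's actual code): every pixel in a tile shares one base direction per sample, plus a tiny per-pixel jitter, so a whole warp marches through nearly the same memory.

```cpp
#include <cmath>
#include <cstdint>

struct Vec3 { float X, Y, Z; };

// Simplified sketch of micro-jittered sampling (assumptions mine, not the
// paper's code). Directions are in the local frame around the surface
// normal (+Z up); rotating them into world space is omitted.
Vec3 TileCoherentDirection(uint32_t SampleIndex, uint32_t PixelX, uint32_t PixelY,
                           uint32_t NumSamples)
{
    // Base direction: identical for every pixel in the tile, stratified over
    // the hemisphere purely by sample index.
    const float TwoPi    = 6.2831853f;
    float       CosTheta = (float(SampleIndex) + 0.5f) / float(NumSamples);
    float       SinTheta = std::sqrt(1.0f - CosTheta * CosTheta);
    float       Phi      = TwoPi * (float(SampleIndex) + 0.5f) / float(NumSamples);

    // Micro-jitter: a small per-pixel rotation, kept inside the stratum so
    // directions within the warp stay nearly parallel.
    float Stratum = TwoPi / float(NumSamples);
    float Jitter  = (float((PixelY & 7) * 8 + (PixelX & 7)) / 64.0f - 0.5f)
                    * Stratum * 0.25f;

    return { SinTheta * std::cos(Phi + Jitter),
             SinTheta * std::sin(Phi + Jitter),
             CosTheta };
}
```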

This is a whole branch of UE4. It can't be done as a plugin. If anything, it will be merged into UE4 or remain as its own build.

Well, I got a little off track with the demo, testing a new idea I had, so it’s gonna take a bit longer.
Don't have much to show for now, but the idea is to use the previous frame's blurred targets to do the same trick I do at the composite stage, but now use that (rather crude) approximation of the scene lighting to generate rays closer to where light is more likely to be (kinda like importance sampling, using the light approximation as the PDF).
That way you generate rays that are potentially closer to light.
I’m still playing with it, but preliminary results are quite good
http://i.imgur.com/s97Azjk.png
Left is normal, right is the new sampling. 8 rays per pixel
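In case the importance sampling part isn't clear, here's a toy version of the idea (not the AHR implementation): treat the approximate lighting as a discrete PDF over candidate ray directions, build the CDF, and invert it with a uniform random number.

```cpp
#include <vector>

// Toy sketch (names and structure mine): pick a ray direction index with
// probability proportional to the approximate light seen in that direction
// last frame. Xi is a uniform random number in [0, 1).
int PickRayDirection(const std::vector<float>& ApproxLight, float Xi)
{
    float Total = 0.0f;
    for (float L : ApproxLight)
        Total += L;
    if (Total <= 0.0f)
        return int(Xi * float(ApproxLight.size())); // no light info: uniform

    float Target = Xi * Total;
    float Cdf = 0.0f;
    for (size_t i = 0; i < ApproxLight.size(); ++i)
    {
        Cdf += ApproxLight[i];
        if (Target < Cdf)
            return int(i);
    }
    return int(ApproxLight.size()) - 1;
}
```

Strictly speaking, you'd also weight each sample by one over its probability to keep the estimate unbiased; otherwise the bright directions get counted twice.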

That looks really interesting! Need to check it out further. Not sure how useful it can be to me though, cause it seems to provide improvements at huge ray counts (and I'm using 4 now…). Still, worth taking a read, and it seems straightforward to implement. Nice catch!

You took the words out of my mouth :smiley: Was about to say the same thing

Sure, but that's the point: all GPUs work on the principle of warps/wavefronts, i.e. all threads are grouped into bunches of 64 (in the case of AMD; I think Nvidia's work groups are smaller, but that isn't totally relevant). If you're shooting off rays in random directions, one will go left, taking the entire warp with it, hit something, and return. Then the entire warp will wait while the next ray goes off to the right, hits something, and returns what it hit.

But if you can group rays together, as in they all go left, then they're all (potentially) going to hit the same thing and return the same thing, or at least a lot more of them will. You'll thus, in theory, vastly reduce the cost of multiple rays. The ideal would be to use UE4's tiled lighting (where each tile is 8x8 pixels, matching the above warp/wavefront structure) to bin ray bundles. You'd isolate pixel clusters (actually, clustered lighting would be much better for this) close enough to each other to matter, shoot rays roughly parallel to each other (as above) from these clusters, and you could theoretically use a single warp/wavefront to shoot an entire bundle of rays. Parallelized raytracing, much, much faster than the random rays done now and more in line with how GPUs work!

But you'd need to be quite clever to get it working right, of course. Each pixel is going to have a different normal, and you'd have to figure out how to cover the entire hemisphere while doing near-parallel bundles at the same time.
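Something like this, maybe (a hypothetical, untested sketch of the binning idea, not anyone's shipping code): average the normals of an 8x8 tile, then shoot a handful of near-parallel bundles tilted around that average.

```cpp
#include <cmath>

struct Vec3 { float X, Y, Z; };

static Vec3 Normalize(Vec3 V)
{
    float Len = std::sqrt(V.X * V.X + V.Y * V.Y + V.Z * V.Z);
    return { V.X / Len, V.Y / Len, V.Z / Len };
}

// Hypothetical sketch: one shared direction per (tile, bundle) pair so the
// whole warp traverses roughly the same part of the scene. Covering the full
// hemisphere properly is exactly the hard part glossed over here.
Vec3 TileBundleDirection(const Vec3 TileNormals[64], int BundleIndex, int NumBundles)
{
    Vec3 Avg = { 0.0f, 0.0f, 0.0f };
    for (int i = 0; i < 64; ++i)
    {
        Avg.X += TileNormals[i].X;
        Avg.Y += TileNormals[i].Y;
        Avg.Z += TileNormals[i].Z;
    }
    Avg = Normalize(Avg);

    // Tilt the shared direction a little per bundle so the bundles together
    // fan out around the average normal.
    float Tilt = 0.5f * (float(BundleIndex) / float(NumBundles) - 0.5f);
    return Normalize({ Avg.X + Tilt, Avg.Y + Tilt * 0.5f, Avg.Z });
}
```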

Yes, I know that: you increase cache hits and therefore performance. But if you look at the paper, they never talk about fewer than 16 rays (or samples) per pixel, and the gains only start to show as the ray count increases (which makes sense: with few rays you do few reads, so you're likely to hit the cache even with low coherence).
It also assumes you're using different random numbers per pixel, but I (like most SSAO or similar implementations out there) use a small random texture tiled across the screen. Not sure how it will work with that in mind.
Still, it’s an easy change to do, so will try it eventually.
Cache coherence is always welcome :smiley:
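For reference, the tiled random texture trick is just this (standard SSAO-style lookup, illustrative, and the 4x4 size is an assumption): a small noise texture repeats across the screen, so the random value only depends on the pixel position modulo the texture size. That's why the paper's per-pixel-unique randomness assumption doesn't quite hold here.

```cpp
#include <cstdint>

// Illustrative only: the per-pixel "random" number repeats every 4 pixels in
// each direction, so nearby tiles share the exact same jitter pattern.
float TiledRandom(const float Noise4x4[16], uint32_t PixelX, uint32_t PixelY)
{
    return Noise4x4[(PixelY & 3) * 4 + (PixelX & 3)];
}
```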