Raytracing was merged into Dev-Rendering!

That’s ahead of my expectations, well done.

If processor speeds keep roughly doubling every 18 months, you won’t see real-time path traced caustics for a decade, if not longer. Someone may figure out a nifty trick to fake it, though.

Any information on whether the path tracer will be integrated with Sequencer? With sample settings etc., and of course a denoiser, just like renderers in DCC apps. Would be super cool; a native offline renderer in Unreal! :open_mouth:

You could jack up the samples and bounces for offline renders. Other than that I doubt any extra integration is needed, people have already demoed it working.
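Something like this in the console is what I have in mind (I’m going from memory on the exact cvar names, so treat them as an assumption and double-check against your build):

```
r.PathTracing.SamplesPerPixel 4096
r.PathTracing.MaxBounces 12
```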

Not sure where you’re getting a decade from, but we just need ray-traced translucency to work correctly and function with GI, and we will get natural caustics. It is already being done in RTX demos.

Just to show you, here is Atomic Heart, which traces caustics from a light source (I believe this is also a UE4 project):

Those caustics are fake, you can see artifacts and they’re still there when RTX is switched off :slight_smile: Nvidia has shown ray-traced caustics though:

Very niiiice!

EDIT: Also realized that I had misread r.RayTracing.Translucency.MaxRefractionRays as r.RayTracing.Translucency.MaxRefractionBounces… whoops! Looks excellent now.

There’s a lot of “and that just happens” in that video. It’s not like that in reality; there’s a lot of optimization behind every task that builds the scene, and it’s not just a single ray simulation that produces the entire result.

Those caustics don’t look real; they’re not actually path traced. I’d say it’s a projected light function, given how it looks the same at all surface distances. The real test is to get the correct interference pattern being cast through a solid piece of transmissive material like a vase:

http://1.bp.blogspot.com/_Eiwce13X738/SFOCJU52ftI/AAAAAAAAC3M/w9BDDd9yJlg/s400/P1010005.JPG

So for something like water, where you don’t really care whether a bit of light actually came from a particular angled surface, you could probably do an even better job with a Gerstner wave function that takes surface distance into account to adjust the focal depth of the projection. Heck, you could sync your water to it. That’s what I mean by faking it well. That’s not what’s happening in the video, but I’m sure it’s coming. But that won’t get you the rendered scene above.
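To sketch what I mean (purely my own toy illustration, nothing from the video or the engine, and every parameter name here is made up for the example), a single Gerstner wave term looks roughly like this:

```cpp
// Minimal sketch of one Gerstner wave term (assumed parameters, illustration only).
#include <cmath>

struct Vec3 { float x, y, z; };

// Evaluate a single Gerstner wave at grid position (x, y) and time t.
// dirX/dirY: normalized wave direction, amp: amplitude, len: wavelength,
// steep: steepness in [0, 1], speed: phase speed.
Vec3 GerstnerWave(float x, float y, float t,
                  float dirX, float dirY,
                  float amp, float len, float steep, float speed)
{
    const float k     = 2.0f * 3.14159265f / len;              // wave number
    const float phase = k * (dirX * x + dirY * y) - speed * k * t;
    const float q     = steep / (k * amp);                      // keeps crests from looping over

    Vec3 p;
    p.x = x + q * amp * dirX * std::cos(phase);                 // horizontal pinch toward crests
    p.y = y + q * amp * dirY * std::cos(phase);
    p.z = amp * std::sin(phase);                                // vertical displacement
    return p;
}
```

The idea would be to use the same analytic surface (height and slope are cheap to get from that phase term) to drive how tightly the projected caustic texture focuses at a given surface-to-floor distance, so the fake focusing tracks the actual wave shape.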

Having worked with path tracing and real caustics before, I can promise that it’s the most resource-intensive operation you can undertake. Even with four Titan X cards it took me a couple of days to render caustics into Lightmass for a tiny scene like the one above using GPU-accelerated path tracing. When we get that level of computation in real time we’ll be making near-perfect simulations.

It’s nice but it still looks like someone wrote a caustic projection light function into the ray shader of the glass material. I think that’s the right way to do it for this gen, but to really get a nice feeling caustic it’s going to have to spawn dozens of rays rather than one or two.

I said “PATH TRACER”, read more carefully. Integration is needed for what I meant. You can’t use it with Sequencer right now.

The only path tracing in UE will be the GI technique, I think. Are you asking them to implement an entirely new process just for Sequencer renders? You could export your scene animation frame by frame and render it more effectively in one of the commercial path tracers.

I was working on a third-party path tracing integration for UE, but I don’t know what’s happened to it since I left.

You have no idea what I’m talking about… there are TWO new modes. One is ray tracing, the other is path tracing. Look at the release notes of the 4.22 preview, please.

Aha, I see what you’re talking about. It’s for reference renders. Give it a go and see what kind of results you can get out of it. Once it’s denoised it might be good as a reference for your scenes, but it might not be fast enough to use for cinematics (nobody would be happier than me if it turned out to be better than the competition though). In most cases path tracing takes a lot longer to get the same visual quality as hybrid solutions, even though it ends up being more realistic and accurate.

I’m really not into game making. We’re using Unreal for viz, so I’m OK even with 0.1 fps hehe, because I don’t need real-time cinematics, just exporting frame by frame. It would just be good to have something like Octane in Unreal. That’s all.

Again, caustics are not some kind of special functionality. They are a product of:

  • Translucent tracing (with GI and refraction/reflection support)
  • Supportive geometry modeled correctly

That’s it! As far as Lightmass is concerned in your case, its speed is highly settings-dependent (not just scene-size-dependent) and not a good measure of unbiased Monte Carlo tracing. Check out: three.js PathTracing Renderer - Bi-Directional PathTracing Classic Test Scene

This path tracer is implemented in WebGL, runs on a standard GPU, and will render a fully GI’ed scene in less than a minute. It’s also interactive at run time. There are other demos on that site too; they are worth a look.

Having built a bunch of path tracers from scratch, CPU and GPU, using both compute and raster, I am pretty confident we will see a GI-only mode of RT in use within 1-2 years, where we exclude direct lighting from the other passes (skylight/emissive, point, rect and spot lights) and instead inject it directly into the GI by default.

This one is really cool actually: three.js PathTracing Renderer - Geometry Showcase

It looks nice, but those aren’t path traced caustics; I can see a few things wrong with that scene.

If you’ve made path tracers then you understand that accurate caustics require a colossal number of samples. A good caustic result requires orders of magnitude more samples than a good GI result. The nature of caustics also means that denoising while still getting an accurate result might be difficult, even for machine learning.
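To put rough numbers on that, here’s the usual back-of-envelope Monte Carlo argument. The sigma values below are completely made up for illustration; the real point is only the sigma/sqrt(N) scaling:

```cpp
// Back-of-envelope sketch (illustrative numbers only): Monte Carlo RMS error falls
// off as sigma / sqrt(N), so the samples needed to hit a target error grow with the
// square of the per-sample standard deviation. Caustic (specular-diffuse-specular)
// paths are found rarely, so their per-sample variance is far higher than diffuse GI.
#include <cmath>
#include <cstdio>

int main()
{
    const double targetError  = 0.01;  // acceptable noise level (hypothetical)
    const double sigmaGI      = 0.5;   // per-sample std dev, diffuse GI (hypothetical)
    const double sigmaCaustic = 20.0;  // per-sample std dev, caustic path (hypothetical)

    // From error = sigma / sqrt(N)  =>  N = (sigma / error)^2
    std::printf("GI samples/pixel      ~ %.0f\n", std::pow(sigmaGI / targetError, 2));
    std::printf("Caustic samples/pixel ~ %.0f\n", std::pow(sigmaCaustic / targetError, 2));
    return 0;
}
```

Because the error only shrinks with the square root of the sample count, a path type with 40x the per-sample standard deviation needs 1600x the samples for the same noise floor, and that’s exactly where caustics hurt.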

There is also quite a big difference between GI and caustics. Presenting a good GI result and saying that it means caustics are feasible shows there’s a lack of understanding somewhere.

Scene size is relevant in my case, as we were baking the results into Lightmass, not rendering to the screen.

There’s an overwhelming trend amongst people who know a bit about ray tracing to think that you just trace a ray and bam! You have a perfectly realistic result. These are the same people who don’t understand the staggering difference in computation between path tracing and ray tracing. The same people who don’t understand that all current and future generations of real time graphics will always be a hybrid solution on some level, therefore not a complete ray tracing or path tracing model.

Everyone needs to step back and remember that what we have right now is a scanline diffuse layer with an independent ray traced lighting pass, an independent ray traced reflection pass and an independent GI pass that is so simplified and limited that it’s no longer path tracing in the traditional sense. There will be limits to what it can do.

It is bi-directional path tracing. There is no cheating going on; it is still unbiased path tracing (or at least nothing about the technique should theoretically introduce bias), just with rays being shot from the perspective of the light as well as from the camera. The same general technique was used in Finding Dory for their water caustics. And getting decent caustics from smooth convex shapes doesn’t really require all that many samples, because the paths from light to surface are not highly divergent, so the probability densities are spatially coherent and easy to approximate statistically.
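To make the “rays from the light” part concrete, here is a self-contained toy sketch (entirely my own illustration under big simplifying assumptions: parallel light, a single glass sphere, no Fresnel, and no connection back to the camera, so it’s plain forward light tracing rather than full bi-directional path tracing). It refracts rays through the sphere and histograms where they land on the floor; the dense bright blob in the output is the caustic, and because the refracted paths through a smooth convex shape stay coherent, a modest ray count already gives a clean pattern.

```cpp
// Toy forward light-tracing sketch (my own illustration, not engine or film code):
// refract parallel rays from a light through a glass sphere and splat where they
// land on the ground plane. The dense region of hits is the caustic.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <random>

struct V3 { double x, y, z; };
static V3 operator+(V3 a, V3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static V3 operator-(V3 a, V3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static V3 operator*(V3 a, double s) { return {a.x * s, a.y * s, a.z * s}; }
static double dot(V3 a, V3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static V3 norm(V3 a) { return a * (1.0 / std::sqrt(dot(a, a))); }

// Snell's law for unit vectors; returns false on total internal reflection.
static bool refractDir(V3 i, V3 n, double eta, V3& out) {
    double c = -dot(n, i);
    double k = 1.0 - eta * eta * (1.0 - c * c);
    if (k < 0.0) return false;
    out = i * eta + n * (eta * c - std::sqrt(k));
    return true;
}

// Ray/sphere hit distance; `far` selects the exit point when starting on the surface.
static double hitSphere(V3 o, V3 d, V3 ctr, double r, bool far) {
    V3 oc = o - ctr;
    double b = dot(oc, d), c = dot(oc, oc) - r * r, disc = b * b - c;
    if (disc < 0.0) return -1.0;
    double s = std::sqrt(disc);
    return far ? -b + s : -b - s;
}

int main() {
    const V3 sphereC = {0.0, 0.0, 1.5};         // glass sphere above the floor (z = 0)
    const double radius = 1.0, ior = 1.5;
    int grid[40][40] = {};

    std::mt19937 rng(1);
    std::uniform_real_distribution<double> u(-radius, radius);

    for (int s = 0; s < 200000; ++s) {
        // Parallel rays straight down, covering the sphere's footprint ("the light").
        V3 o = {u(rng), u(rng), 5.0}, d = {0.0, 0.0, -1.0};

        double t = hitSphere(o, d, sphereC, radius, false);
        if (t < 0.0) continue;                   // missed the sphere entirely
        V3 p = o + d * t;
        if (!refractDir(d, norm(p - sphereC), 1.0 / ior, d)) continue;   // enter glass

        p = p + d * hitSphere(p, d, sphereC, radius, true);              // march to exit
        if (!refractDir(d, norm(sphereC - p), ior, d)) continue;         // leave glass
        if (d.z >= 0.0) continue;                // not heading toward the floor

        V3 h = p + d * (-p.z / d.z);             // intersect ground plane z = 0
        int gx = (int)((h.x + 2.0) * 10.0), gy = (int)((h.y + 2.0) * 10.0);
        if (gx >= 0 && gx < 40 && gy >= 0 && gy < 40) grid[gy][gx]++;
    }

    for (int y = 0; y < 40; ++y) {               // crude ASCII heat map of the caustic
        for (int x = 0; x < 40; ++x)
            std::putchar(" .:-=+*#%@"[std::min(9, grid[y][x] / 100)]);
        std::putchar('\n');
    }
    return 0;
}
```

In a real bi-directional integrator those light-path vertices would be connected to camera paths instead of being splatted into a histogram, but the point about coherence is the same.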

Odd caustic placement. Normally with a translucent surface, caustics face away from the light source rather than pointing in random directions. ¯\_(ツ)_/¯

So one solution for convex shapes, one for concave? Starting to sound a bit like a hybrid renderer.

Not at all; perfectly smooth convex solids like spheres and cylinders just naturally form very simple caustic patterns, with no weird outliers making unpredictable noise. No special-case math; that’s just the result of the rendering equation. You give it a reasonably low number of samples and you still get a good result.

How well is that going to work in the context of a game engine, though? This isn’t a real-world usage scenario you’re describing; game artists won’t be limited to spheres and cylinders (which have to be the least common use case).

You might as well render the caustics for a lot of different shapes and different light positions and use them to train a neural network. I’d get behind that. You could use volumetric renders or light paths for small patches of geometry and then train the network to stitch the results for each geometry patch together.

I think I found my next project.