Adaptive Path Tracing - Excessive Sampling Despite Converged Pixels

We’re currently evaluating the Adaptive Path Tracing feature in UE 5.5.4, but we’ve encountered behavior that appears inconsistent with the expected convergence logic.

In our tests, Adaptive Sampling continues to add samples even after pixels are shown as converged by r.PathTracing.AdaptiveSampling.Visualize=1. This leads to excessive sampling and longer render times, even in areas where little or no visual change is occurring.

Test Details:

  • Samples per pixel: 12,000
  • r.PathTracing.AdaptiveSampling.FilterWidth = 1.3
  • Tested r.PathTracing.AdaptiveSampling.ErrorThreshold values:
    • 0.005
    • 0.0005
    • 0.00001
    • 0.000001

Once a pixel is considered converged, the renderer should stop adding excessive samples, or limit the number of additional samples (e.g. no more than ~1,000 extra spp per pixel), even if a high global SamplesPerPixel value is set (e.g. 100,000).

Here are our questions:

  1. What is the exact relationship between ErrorThreshold and how convergence is determined internally?
  2. Is there a way to prevent additional sampling once convergence is reached, or at least cap the number of oversamples?
  3. Is the convergence logic considering pre-tonemapped values or some other metric?

We understand some level of oversampling may be necessary to avoid false convergence, but the current behavior seems overly conservative.

Thanks in advance for your help - we’re hoping this is something tunable or improvable, and we’d love any additional insights you can provide!

Best regards,

Mateusz

The short answer is that the adaptive sampling feature in the path tracer is still experimental precisely for the reasons you found -- it can be a bit hard to control in practice.

The samples per pixel you set acts as the global maximum. No pixel will ever get more than that number of samples. The error threshold is meant to be a measure of how much noise is allowed before the sampling process stops. Setting a higher value will let sampling stop earlier, while smaller values will require a very low noise level.
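
For reference, a typical test setup can be expressed entirely through cvars, e.g. in DefaultEngine.ini under [ConsoleVariables] or typed at the console. The values below are just illustrative, not recommendations (and I'm assuming here the enable flag r.PathTracing.AdaptiveSampling alongside the cvars already discussed in this thread):

```ini
[ConsoleVariables]
; Enable the experimental adaptive sampling path
r.PathTracing.AdaptiveSampling=1
; Global cap: no pixel ever receives more than this many samples
r.PathTracing.SamplesPerPixel=12000
; Higher values let sampling stop earlier; lower values demand less residual noise
r.PathTracing.AdaptiveSampling.ErrorThreshold=0.005
; Overlay a heatmap of pixels that are still being actively sampled
r.PathTracing.AdaptiveSampling.Visualize=1
```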

The tricky part is that perceived noise is not perfectly correlated with the error threshold, so in practice it can be hard to pick a good value, as you’ve found.

To answer your specific questions directly:

1) The error threshold is tied to variance after tone mapping. The exact tone curve used here is just an approximation of the real one to make it cheaper, but the basic idea is that the S-curve shape of the tone curve will help de-emphasize both dark regions and overly bright regions while focusing attention on areas that are changing quickly. The variance is estimated at multiple scales, and we require higher levels (blurrier, affecting more pixels) to pass the threshold before looking at finer details. This helps avoid marking a pixel as “done” too early. You can visualize these variance maps with the cvar “r.PathTracing.AdaptiveSampling.Visualize” and values 3 through 7.
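
To make the multi-scale idea concrete, here is a minimal standalone sketch of that logic. It is an illustration of the concept only, not Epic's actual code: the Reinhard-style ToneApprox, the Welford moment tracking, and the 2x2 box pyramid are all stand-ins, and the sketch simply requires every pyramid level to pass the threshold (which reaches the same final decision as the coarse-to-fine gating described above):

```cpp
#include <cstdio>
#include <vector>

// Cheap S-curve stand-in for the real (approximate) tone curve.
static float ToneApprox(float x) { return x / (1.0f + x); }

// Per-pixel running moments (Welford) of tone-mapped luminance.
struct PixelStats {
    float mean = 0.0f, m2 = 0.0f;
    int n = 0;
    void AddSample(float radiance) {
        const float t = ToneApprox(radiance);
        ++n;
        const float d = t - mean;
        mean += d / n;
        m2 += d * (t - mean);
    }
    float Variance() const { return n > 1 ? m2 / (n - 1) : 1e30f; }
};

// 2x2 box-filter a variance map down one pyramid level (coarser = blurrier).
static std::vector<float> Downsample(const std::vector<float>& v, int w, int h) {
    std::vector<float> out((w / 2) * (h / 2));
    for (int y = 0; y < h / 2; ++y)
        for (int x = 0; x < w / 2; ++x)
            out[y * (w / 2) + x] =
                0.25f * (v[2 * y * w + 2 * x]       + v[2 * y * w + 2 * x + 1] +
                         v[(2 * y + 1) * w + 2 * x] + v[(2 * y + 1) * w + 2 * x + 1]);
    return out;
}

// A pixel only counts as converged when its own variance AND every coarser
// level containing it pass the threshold.
static bool IsConverged(const std::vector<std::vector<float>>& pyramid,
                        int width, int x, int y, float threshold) {
    int lw = width, lx = x, ly = y;
    for (const std::vector<float>& level : pyramid) {
        if (level[ly * lw + lx] > threshold)
            return false;
        lw /= 2; lx /= 2; ly /= 2;
    }
    return true;
}

int main() {
    const int W = 4, H = 4;
    std::vector<PixelStats> stats(W * H);
    // Fake a few samples per pixel (deterministic jitter, for illustration).
    for (int i = 0; i < W * H; ++i)
        for (int s = 0; s < 64; ++s)
            stats[i].AddSample(0.5f + 0.001f * ((i * 7 + s * 13) % 10));

    std::vector<float> variance(W * H);
    for (int i = 0; i < W * H; ++i)
        variance[i] = stats[i].Variance();

    std::vector<std::vector<float>> pyramid;
    pyramid.push_back(variance);                   // finest level
    pyramid.push_back(Downsample(variance, W, H)); // one coarser level

    std::printf("pixel (0,0) converged: %s\n",
                IsConverged(pyramid, W, 0, 0, 1e-4f) ? "yes" : "no");
    return 0;
}
```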

2) No: when using the visualize cvar at a value of 1, areas that emerge from the heatmap are the ones considered “done” and won’t be sampled again. In rare cases, a very bright firefly could “re-enable” a few pixels around it temporarily, but that tends to be short-lived. If you reach your max number of samples without the heatmap fully going away, it indicates that not enough samples could be taken to get below the error threshold.

3) Like I mentioned above, we’re using an approximate tone curve to roughly predict the amount of error. One thing worth mentioning is that adaptive sampling works best with fixed exposure settings. If the exposure is being adjusted automatically, this can create a feedback loop with the adaptive error metric and cause it to get confused. Even more confusingly, the heatmap visualization itself can skew the automatic exposure (since exposure runs as a post-process and sees the heatmap as real pixel data).
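
If it helps, exposure can be pinned from a post-process volume, the camera, or code. A hypothetical helper along these lines (the function name is ours for illustration; the FPostProcessSettings fields are standard UE):

```cpp
#include "CineCameraActor.h"
#include "CineCameraComponent.h"

// Hypothetical helper: force manual exposure on a cine camera so auto
// exposure can't feed back into the adaptive error metric.
void LockExposure(ACineCameraActor* Camera, float ExposureCompensation)
{
    FPostProcessSettings& PP = Camera->GetCineCameraComponent()->PostProcessSettings;
    PP.bOverride_AutoExposureMethod = true;
    PP.AutoExposureMethod = EAutoExposureMethod::AEM_Manual; // fixed exposure
    PP.bOverride_AutoExposureBias = true;
    PP.AutoExposureBias = ExposureCompensation; // exposure compensation, in stops
}
```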

If you are able to share your specific scene, it would be interesting to study where the noise here is coming from and use it as a good test case to see how we can improve adaptive sampling further.

Another limitation worth mentioning here is that adaptive sampling does not work well with MRQ’s reference motion blur feature. Because MRQ is only able to sample time forward, it doesn’t play well with adaptive sampling’s early termination of pixels. This is another reason why we’re still treating the adaptive sampler as experimental for now.

Hope this helps!

Hi Chris!

Thanks for the detailed explanation—this clears up a lot. The information about variance being calculated after an approximate tone mapping curve, and the role of multi-scale error evaluation, is especially helpful.

I’d be happy to share the test project for further investigation (link shared via MyAirBridge.com).

Thanks again for your support—we really appreciate the transparency around the feature’s current state.

Best regards,

Mateusz

Thanks for the project. I took a look, and the adaptive metric seems to be working more or less as expected.

The main thing to keep in mind about the current implementation is that it’s not trying to add more samples, just taking away samples from areas that are already “good enough”. Here “good enough” means passing whatever error threshold you set, which could actually still be somewhat noisy (but should denoise well, as the noise is expected to be more or less uniform).

You can see the speedup by looking at the render time per sample with `stat unitgraph`, for example. Without adaptive sampling, the cost per sample is more or less flat. With adaptive on, using the default threshold, the renderer goes from processing all pixels to quickly working on a smaller section of the image (and going much faster). However, this also exposes a limitation: if the noisy “active” area is quite small, you will mostly be limited by the per-frame overhead (the amount of actual path tracing work will be small). So it’s quite tricky to predict the exact speedups. With a low threshold where all pixels stay “active” for a long time, there is probably very little benefit.
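
For anyone reproducing this comparison, it is just a matter of toggling the cvar at the console while the graph is visible and watching how the frame time evolves over the accumulation:

```
stat unitgraph
r.PathTracing.AdaptiveSampling 0
r.PathTracing.AdaptiveSampling 1
```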

Hi Chris!

Thanks so much for taking the time to review the project and explain the current behavior of the adaptive sampling system in more detail. I will keep your guidance in mind as we continue working with Adaptive Path Tracing. Looking forward to future improvements as the feature matures.

Thanks again!

Cheers,

Mateusz