Hi, I’m currently working with a path tracer that uses adaptive sampling to optimize render performance. I’ve encountered a visual artifact that appears under specific lighting conditions.
When rendering scenes with very bright regions, especially high-intensity light sources or highlights, I get a black border or halo around these bright pixels. This artifact becomes particularly noticeable when the surrounding area is darker.
To illustrate the issue, I’ve attached images:
- Without Denoiser and Without Bloom: The halo effect is clearly visible around the bright region.
- With Denoiser and Bloom: The artifact is somewhat masked, but still subtly present.
- Without Adaptive Sampling: for comparison, no artifact is visible.
Are there recommended settings or techniques to mitigate this effect?
The error threshold is currently set to 0.001. I have also tried values smaller than the default, without success.
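For reference, these are the console variables involved as I have them set (assuming a recent UE version; exact cvar names can vary between releases):

```
r.PathTracing.AdaptiveSampling 1
r.PathTracing.AdaptiveSampling.ErrorThreshold 0.001
```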
Thanks!
Steps to Reproduce
- Pathtracer AA Settings
- Spatial Samples: 64
- Temporal Samples: 16
- Warm-up Frames: 50
- Engine Version: Unmodified engine build
Render Settings in DefaultEngine.ini
[/Script/Engine.RendererSettings]
r.ClearCoatNormal=True
r.Material.RoughDiffuse=True
r.Material.EnergyConservation=True
r.ReflectionMethod=1
r.GenerateMeshDistanceFields=True
r.DynamicGlobalIlluminationMethod=1
r.Lumen.TraceMeshSDFs=1
r.Lumen.ScreenTracingSource=1
r.Lumen.TranslucencyReflections.FrontLayer.EnableForProject=True
r.Shadow.Virtual.Enable=1
r.RayTracing=True
r.SkinCache.CompileShaders=True
r.MegaLights.EnableForProject=True
r.RayTracing.Shadows=True
r.DistanceFields.DefaultVoxelDensity=0.400000
r.AllowStaticLighting=False
r.LocalFogVolume.ApplyOnTranslucent=True
r.PostProcessing.PropagateAlpha=True
r.GBufferFormat=3
r.HeterogeneousVolumes.Shadows=True
r.Translucency.HeterogeneousVolumes=True
r.Substrate=True
r.Substrate.OpaqueMaterialRoughRefraction=True
r.GPUSkin.Support16BitBoneIndex=True
r.RayTracing.ResidentGeometryMemoryPoolSizeInMB=4000
r.Lumen.HardwareRayTracing.LightingMode=1
r.GPUSkin.UnlimitedBoneInfluences=True
SkeletalMesh.UseExperimentalChunking=1
r.Substrate.EnableLayerSupport=True
r.Streaming.PoolSize=8000
r.SkyLight.RealTimeReflectionCapture.DisableExpenssiveCaptureMessage=1
r.PathTracing.Experimental=True
r.PathTracing.SpatialDenoiser.Type=1
r.PathTracing.TemporalDenoiser.Name=NFOR
r.NFOR.FrameCount=3
r.NFOR.PredivideAlbedo.Offset=0.1
Render Settings in MRQ Preset
r.RayTracing.Culling 0
r.RayTracing.Nanite.Mode 1
r.PathTracing.MaxBounces 8
r.PathTracing.SamplesPerPixel 8192
r.PathTracing.TemporalDenoiser 1
r.PathTracing.SpatialDenoiser.Type 1
r.PathTracing.AdaptiveSampling 1
The only controls for adaptive sampling are the error threshold and the total max samples, but here it looks like the adaptive sampler is giving up too early around the light source.
How low have you tried going with the error threshold? Have you tried using the r.PathTracing.AdaptiveSampling.Visualize cvar to help visualize where the sampler is stopping? That can be helpful to “see” what impact the threshold is having on the distribution of samples.
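For anyone following along, the visualization mentioned above can be toggled from the editor console:

```
r.PathTracing.AdaptiveSampling.Visualize 1
```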
It would be helpful to be able to replicate the issue locally so we can dig into the root cause. If you aren’t able to share the project, could you see if you can replicate the artifact with just two cubes? I would guess the main important values are the base color of the ceiling and the emissive color of the emissive object next to it (as well as any specific exposure settings you might have locally).
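To illustrate the likely failure mode, here is a minimal Python sketch of a generic relative-error stop criterion. This is not the engine’s actual estimator, just an illustration of how any variance-based criterion can misjudge a dark pixel next to a small bright light: the sampler stops a pixel once its estimated error drops below the threshold, but that estimate is built only from the samples taken so far.

```python
import math

def relative_error(mean, var, n, eps=1e-4):
    """Illustrative stop criterion: standard error of the pixel mean,
    normalized by the (offset) mean. NOT Unreal's actual estimator."""
    return math.sqrt(var / n) / (mean + eps)

# Dark pixel next to a small bright light: if the first 16 samples all
# miss the light, the sample variance is ~0, so the estimated error is
# ~0 and the pixel "converges" immediately -- even though its true
# error is large. This produces the under-sampled dark halo.
dark = relative_error(mean=0.02, var=0.0, n=16)      # 0.0, stops sampling

# Bright pixel that does see the light: high variance keeps it sampling.
bright = relative_error(mean=50.0, var=400.0, n=16)  # ~0.1, keeps sampling
```

This also suggests why lowering the threshold alone doesn’t help: an estimated error of exactly zero is below any positive threshold.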
Thanks!
Hi, thanks for the reply. I can reproduce the issue with a simple scene. The issue is already visible in the viewport. Changing the error threshold to smaller values, such as 0.000001 or 0.000000001, does not change the result. I have added an example project with a movie render queue.
When I tried to reproduce it in a small scene, I noticed that the black pixels were not visible when I used only temporal samples for anti-aliasing and set the spatial samples to 1.
[Image Removed]
Thank you for the simple repro - this does indeed appear to be a weakness of our adaptive sampling method, and is part of the reason the method is still marked as experimental.
I’m afraid that the only workaround for now will be avoiding adaptive sampling in this case, but we’ll definitely keep this in mind as we look to improve the method in the future.
Out of curiosity, how big of a speed improvement was adaptive sampling bringing you on your original scene?
As far as MRQ settings go, when using Spatial Samples = 1 with all sampling done in temporal samples, did you have reference motion blur enabled?
I had hoped to shave one or two minutes off the per-frame render time by using high render settings. However, due to the black pixels, I could not properly verify this, so I have not achieved a good result yet. At the beginning of the optimization, the render time was 5-15 minutes per frame.
Currently, our rendering times are 2-3 minutes without denoising or adaptive sampling.
I still have it enabled in the editor, and it works great there, in my opinion. However, it is no longer enabled in the final rendering with the MRQ.
Reference Depth of Field and Reference Atmosphere are enabled.
Just as an update, I have made some improvements for the next release (UE 5.7) that should help address this case (and others) where adaptive sampling was falling short.
Thanks for reporting your experience!