In my game, there is a build system that uses a single static mesh (a combination of all of the full actor's static meshes) as a hologram preview of the object that is about to be placed. I have found that some of the more detailed objects (the ones with 20k+ triangles) cause a very severe performance hit (FPS drops from 60-ish down to under 20) when the hologram is visible and near the player.
I have tried changing the hologram material from transparent to opaque and that doesn't change anything, so I have ruled out overdraw (right?). The full actor (with multiple static meshes instead of one large one) barely affects performance at all, even with the added Niagara particle effects and animated glow that the hologram doesn't have. Turning Nanite on helps a lot (FPS only drops by about 10), but the model looks terrible with Nanite enabled.
Another odd thing I noticed is that when there is no hologram, GPU usage sits at about 80%. As soon as the hologram is shown, GPU usage drops to about 30%. I expected the opposite: GPU usage would max out, which would explain why FPS drops. CPU and RAM usage do not noticeably change between having no hologram and having the hologram.
What could be causing this odd behavior? What tools can I use to track this down?
Thank you for the response; however, as I mentioned, changing the material didn't resolve the issue. I changed the material to a simple, solid-color opaque material and still experienced the problem.
Is the hologram mesh properly optimized? How many polygons does it have?
It would be best if you recorded a session with Unreal Insights and measured which part of scene rendering is causing the biggest impact. Perhaps Unreal does some preparation when creating the preview? Compiling shaders, etc.
Insights should show if it’s just the hologram or something else.
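In case it helps, these are the console commands I'd start with. Treat the Trace arguments as a sketch (the exact channel names can vary by engine version), and the # notes are just annotations, not console syntax:

```
stat unit                   # Game / Draw / GPU / RHI thread times – which one spikes?
stat gpu                    # per-pass GPU timings
stat scenerendering         # draw calls and primitive counts
ProfileGPU                  # single-frame GPU capture (opens the GPU Visualizer)
Trace.Start cpu,gpu,frame   # begin recording an Unreal Insights trace
Trace.Stop                  # stop; open the resulting .utrace in Unreal Insights
```

In particular, stat unit will show whether the frame is actually GPU-bound or waiting on the game/render thread, which would fit the GPU usage dropping when the hologram appears.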
50k for a preview mesh is excessive. Especially if you are using transparency.
I would suggest using LOD models with reduced poly counts for the previews. You will get a lot of overdraw with full-detail models.
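If the mesh already has LODs, you don't even need a separate preview asset; you can force the hologram component to a lower LOD. A minimal sketch, assuming the preview is a plain UStaticMeshComponent and the function/variable names are placeholders:

```cpp
// Hedged sketch: force the hologram preview to render a reduced LOD.
// Assumes the preview component is a UStaticMeshComponent whose mesh has LODs set up.
#include "Components/StaticMeshComponent.h"
#include "Engine/StaticMesh.h"

void ConfigureHologramPreview(UStaticMeshComponent* HologramMeshComp)
{
    if (!HologramMeshComp)
    {
        return;
    }

    // ForcedLodModel is 1-based: 0 = automatic, 1 = LOD0, 2 = LOD1, ...
    // Clamp to the number of LODs the mesh actually has.
    const int32 NumLODs = HologramMeshComp->GetStaticMesh()
        ? HologramMeshComp->GetStaticMesh()->GetNumLODs()
        : 0;
    const int32 DesiredLOD = 3; // e.g. force LOD2 for the preview
    HologramMeshComp->SetForcedLodModel(FMath::Clamp(DesiredLOD, 0, NumLODs));

    // Previews usually don't need to cast shadows either, which also cuts GPU work.
    HologramMeshComp->SetCastShadow(false);
}
```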
A Masked material with temporal AA dithering (dithered opacity) sidesteps the translucency overdraw problem.
You can also try messing with the render console variables starting with r.OIT (order-independent transparency).
Tweaking them might help with the frame rate slightly if you have overlapping transparent objects, though it probably won’t help with overdraw from within the same object.
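For reference, cvars can be set from the console at runtime or pinned in config. A sketch only; the cvar name below is from memory and may differ by engine version, so type r.OIT in the console and let the auto-complete list what your build actually exposes:

```
; DefaultEngine.ini
[SystemSettings]
r.OIT.SortedPixels=1   ; experimental order-independent translucency (name/availability varies by version)
```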
If 50k is excessive, how are people using models with millions of triangles? Is there a trick I'm missing? Also, again, the transparency is not the problem and has a minimal, unnoticeable effect (if any) on performance. Switching to a material that is as basic as you can get, with no transparency (Blend Mode: Opaque), does not change performance.