- First thing I recommend is reading the UE docs, they are essential to clawing performance back.
- Large scale how? Prevalence in the world? Material complexity? Math complexity? Collision behavior? Lighting behavior? All of those have different answers.
- I’m not a VFX specialist, but AFAIK it needs to be manually set up. Effective implementation depends on your effect type, as there are different ways to gracefully scale back complexity in the distance depending on what the effect has to do.
- Depends on what you want pooling to look like.
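On that last point, at its simplest, pooling is a free list you acquire from and release back to, instead of spawning and destroying components every time an effect fires. A minimal sketch in plain C++ (`FxInstance` and `FxPool` are made-up stand-ins for a pooled Niagara component wrapper, not UE types):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical pooled-effect handle; per-use state gets cleared in Reset().
struct FxInstance {
    bool active = false;
    void Reset() { active = false; }
};

class FxPool {
public:
    explicit FxPool(std::size_t capacity) : storage_(capacity) {
        // Pointers stay valid because storage_ never resizes after this.
        for (auto& fx : storage_) free_.push_back(&fx);
    }

    // Returns nullptr when exhausted -- the caller decides whether to
    // steal the oldest live effect or simply skip spawning this one.
    FxInstance* Acquire() {
        if (free_.empty()) return nullptr;
        FxInstance* fx = free_.back();
        free_.pop_back();
        fx->active = true;
        return fx;
    }

    void Release(FxInstance* fx) {
        fx->Reset();
        free_.push_back(fx);
    }

    std::size_t FreeCount() const { return free_.size(); }

private:
    std::vector<FxInstance> storage_;
    std::vector<FxInstance*> free_;
};
```

The interesting design decision is the exhaustion policy in `Acquire`: hard cap (skip), recycle-oldest, or grow — which one you want depends on whether dropped effects are acceptable in your game.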
To take a crack at the CPU-to-GPU question, that partly depends on how your system is being taxed and what you need the Niagara particles to do. If you’re creating a rain system where you have to draw an absurd number of particles with very simple behavior (basic collision, if that), then GPU is your best bet. If you have complicated particle systems that are querying world data in a GPU-unfriendly way, you’ll need to use CPU. In general, if you’re GPU-bottlenecked, CPU simulation may be the way to go, and vice versa if you’re CPU-bound. It also depends on whether you’re more bound by compute or memory, since you can cut resolution more easily than you can simplify complex behavior. Use your transparency overdraw visualizations and keep overdraw to a minimum. It probably goes without saying, but don’t use any of the fancy Niagara fluid-sim stuff if performance is even remotely a concern.
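For pulling those visualizations and timings up quickly, a few stock console commands help (these exist in current UE, but double-check names against your engine version):

```
stat GPU                    (per-pass GPU timings)
stat Niagara                (Niagara system/emitter counts and tick costs)
ProfileGPU                  (dump one GPU frame's timings to the log)
viewmode shadercomplexity   (per-pixel cost heat map; heavy overdraw glows)
viewmode lit                (back to normal rendering)
```

The editor also exposes the overdraw and shader complexity views under the viewport's Optimization Viewmodes menu if you'd rather click than type.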
For certain LOD techniques: you can make almost any particle system fail gracefully in the distance; the question is what you’re willing to sacrifice. If you can switch complex 3D particle systems to flipbooks in the distance (especially if they’re instanced a ton), you can get enormous cost savings. If you have lit particles, ask yourself whether they truly need dynamic lighting, because not paying for dynamic lighting is a huge saver.
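The distance-tier idea boils down to a threshold function. In practice Niagara’s per-emitter Scalability settings are where you’d author this, but the logic is just this sketch (tier names and thresholds are made up for illustration):

```cpp
#include <cassert>

// Hypothetical effect tiers: full simulation near the camera, a cheap
// billboard flipbook at mid range, nothing at all when far enough away.
enum class FxTier { Full3D, Flipbook, Culled };

FxTier PickFxTier(float distance, float flipbookAt, float cullAt) {
    if (distance >= cullAt)     return FxTier::Culled;    // too far: skip entirely
    if (distance >= flipbookAt) return FxTier::Flipbook;  // mid: flipbook fallback
    return FxTier::Full3D;                                // near: full 3D system
}
```

The thresholds are where the "what are you willing to sacrifice" question lives: pull `flipbookAt` in as close as you can before players notice the swap.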
If your particles actually spawn lights, they should probably be shadowless. Even if you’re using MegaLights, they should be shadowless for quality reasons. No rect lights, no shadowed lights, and especially no volumetric shadowed lights. Only break these rules if you have a very good reason to, like a hero light rig close to the camera. Even then, consider hacks before anything performance-heavy, because it’ll probably be big on screen and tax overdraw.
If they need collision and they’re GPU particles, avoid hardware ray-traced collisions, and unless they’re loud VFX like snow or rain, consider just depth-buffer collisions. Have simplified proxies for CPU particles to raycast against, and avoid complex collision wherever you can.
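To make the "simplified proxies" point concrete: a ray-vs-sphere test is a handful of multiplies, versus walking a triangle mesh. This is plain C++ for illustration, not UE’s collision API; `Vec3` and `RaySphere` are made-up names:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

static float Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  Sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

// True if a ray (origin o, unit direction d) hits a sphere proxy.
bool RaySphere(Vec3 o, Vec3 d, Vec3 center, float radius) {
    Vec3 oc = Sub(o, center);
    float b = Dot(oc, d);                        // projection onto the ray
    float c = Dot(oc, oc) - radius * radius;     // squared distance minus r^2
    // Hit if the origin is inside the sphere, or the sphere is ahead of
    // the ray (b <= 0) and the discriminant b^2 - c is non-negative.
    return (c <= 0.0f) || (b <= 0.0f && b * b >= c);
}
```

A handful of spheres and capsules approximating your level geometry is usually plenty for particle bounces, and it keeps the hot loop branch-light and cache-friendly.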
As for profiling, I honestly cannot tell you because I’m not a particles guy, and when I’ve had Niagara problems I’ve had colleagues help me out. But I’ve learned a thing or two nonetheless.
Just a few suggestions, take them or leave them.
One final note:
if you’ve spent a good amount of time as an artist, in UE or in another DCC, your eye is very well-tuned to detail. If your game involves fire, magic spells, and explosions, there’s a fair chance it’s an action game. If it’s an action game, people will be processing the scene fast, and the human eye, especially the untrained eye, is already LODing the scene mentally to accelerate reflexes.
What that means is: for action-oriented VFX, focus on silhouette value and broader color, and don’t worry too much about fine detail or minute behavior only you will notice. Optimization starts with content, and your content starts with your game concept. Silent Hill’s signature fog was a way to art around perf limitations, and you can always do the same.