We are working on an animation series using Unreal Engine 5.3. We have a lot of high-definition characters, and our skeletal meshes weigh between 1 GB and 2 GB per asset. As a consequence, the editor is slowed down and we experience crashes regularly.
First things first, we don’t fully understand why skeletal mesh assets are that large on disk for a mesh of roughly one million vertices. Could they perhaps be reduced a lot, solving our main issue simply by optimising these assets?
In case it can’t be solved at the asset level, we would like to reserve the high-definition skeletal meshes for rendering and lower the strain on the editor as much as possible. We have some ideas, but we would like to check feasibility before investing a lot of time into them. For now, we are thinking about:
Runtime skeletal mesh tessellation (we don’t know if it could work with Lumen)
Using LODs (we already investigated a bit and it seems that most of the time the editor loads all LODs, which we don’t want)
Changing the whole pipeline to use geometry caches instead of skeletal meshes
Making our own solution based on a subclassed skeletal mesh component with support for switching between two skeletal mesh assets that share the same skeleton
As a test, I created a skeletal mesh asset that has 4 LODs, where LOD 0 has actual geometry, and the rest are auto-generated with LOD1-3 having 50%, 25% and 12.5% triangle count as compared to LOD0. The LOD 0 geometry has 500K vertices / 900K triangles. The size on disk is about 300MB. So 1GB for 1M verts doesn’t seem too out of place.
The thing to keep in mind here is that the asset will store two or more copies of the geometry. It stores a high-definition copy of the editable geometry (for each LOD that is _not_ auto-generated) and a copy of the renderable geometry for each LOD, whether auto-generated or not.
The problem here is that this geometry data is all stored uncompressed in the asset on disk. This is something we’re looking at fixing in future versions, since the geometry data should compress very well.
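If you want to see where the bytes go for a particular asset, the Size Map tool (right-click the asset > Size Map) shows the on-disk footprint per dependency. From code, a rough sketch like the following prints the in-memory resource size of a skeletal mesh, which gives a feel for how much the editable and renderable copies add up (this is a minimal editor-side example, not a disk-size measurement):

```cpp
#include "Engine/SkeletalMesh.h"

// Rough sketch: log the exclusive and estimated-total resource size of a
// skeletal mesh. Exclusive counts the mesh's own data; EstimatedTotal also
// includes referenced resources.
static void LogSkeletalMeshSize(USkeletalMesh* Mesh)
{
	if (!Mesh)
	{
		return;
	}

	const SIZE_T ExclusiveBytes = Mesh->GetResourceSizeBytes(EResourceSizeMode::Exclusive);
	const SIZE_T TotalBytes     = Mesh->GetResourceSizeBytes(EResourceSizeMode::EstimatedTotal);

	UE_LOG(LogTemp, Log, TEXT("%s: exclusive %.1f MB, estimated total %.1f MB"),
		*Mesh->GetName(),
		ExclusiveBytes / (1024.0 * 1024.0),
		TotalBytes / (1024.0 * 1024.0));
}
```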
That said, 1M vertices for a skeletal mesh asset in 5.3 is very, very high. The original skelmesh rendering pipeline was never really designed to handle that many triangles -- hence the push to skelmesh Nanite (as an experimental feature) in UE 5.6 onwards.
Missed the second part of that question. Apologies.
Your best bet is probably to have a replacement script, or a custom skeletal mesh component subclass, that automatically swaps the low-resolution editor-use mesh out for a high-resolution runtime mesh during cook. That way the runtime will receive the high-resolution characters, and you can happily use the editor without getting bogged down too much. A sketch of what that component could look like follows below.
This is something Nanite for skeletal meshes is designed to fix, though that’s unfortunately only available in UE 5.6 onwards (with much improved support in 5.7).
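Something along these lines (an untested sketch; the class and property names are just placeholders): a component that keeps a lightweight mesh assigned in the editor and soft-references the heavy one, then swaps it in when the game or PIE actually starts. A cook-time variant would do the swap in your replacement script instead.

```cpp
#pragma once

#include "CoreMinimal.h"
#include "Components/SkeletalMeshComponent.h"
#include "SwappableSkeletalMeshComponent.generated.h"

// Sketch: authored with a lightweight editor mesh, swaps in the high-resolution
// mesh at runtime. Both meshes are assumed to share the same skeleton.
UCLASS(ClassGroup = (Rendering), meta = (BlueprintSpawnableComponent))
class USwappableSkeletalMeshComponent : public USkeletalMeshComponent
{
	GENERATED_BODY()

public:
	// High-resolution mesh, referenced softly so the editor never loads it
	// just by opening the level or the Blueprint.
	UPROPERTY(EditAnywhere, Category = "Rendering")
	TSoftObjectPtr<USkeletalMesh> HighResMesh;

	virtual void BeginPlay() override
	{
		Super::BeginPlay();

		// At runtime (including PIE), replace the editor-weight mesh with the
		// high-resolution one.
		if (USkeletalMesh* LoadedMesh = HighResMesh.LoadSynchronous())
		{
			SetSkeletalMesh(LoadedMesh, /*bReinitPose*/ true);
		}
	}
};
```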
Thanks for the answer. I think it is too early for the production to switch to Nanite for skeletal meshes, even if we could make the upgrade to 5.6.
So swapping meshes using a custom component is the solution we had in mind, and it’s good to have that direction confirmed. I have started to run some tests, and there is a small caveat with UPackage at editor time.
When the high-resolution mesh is loaded (from PIE or a custom tool), it doesn’t unload as expected when the soft object pointer is reset; I also need to force the unloading of the package. It does not seem to be a big deal, but I am not fully confident that I won’t create editor bugs by manipulating packages. If you have a list of good/bad practices regarding UPackage, it would be appreciated.
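For reference, this is roughly what the test does (simplified, editor-only, names are placeholders): point the component away from the heavy mesh so nothing hard-references it anymore, then unload its package via UPackageTools from the UnrealEd module.

```cpp
#include "Components/SkeletalMeshComponent.h"
#include "Engine/SkeletalMesh.h"
#include "PackageTools.h"
#include "UObject/Package.h"

// Sketch: drop the hard reference held by the component, then ask the editor
// to unload the package that owned the high-resolution mesh.
void UnloadHighResMesh(USkeletalMeshComponent* Component,
                       const TSoftObjectPtr<USkeletalMesh>& HighResMesh)
{
	UPackage* Package = nullptr;
	if (USkeletalMesh* LoadedMesh = HighResMesh.Get())
	{
		Package = LoadedMesh->GetOutermost();
	}

	// Point the component back at a lightweight mesh (or nullptr) so it no
	// longer keeps the high-resolution asset alive.
	Component->SetSkeletalMesh(nullptr);

	if (Package)
	{
		// Editor-side unload (UnrealEd module).
		UPackageTools::UnloadPackages({ Package });
	}
}
```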
Assuming that no one else is holding a reference to that mesh, this may be related to the fact that the editor doesn’t really run GC that frequently. You can run “trygc” to force a garbage collection pass once you’ve let go of all references.
Failing that, you may have to trace who else is still holding a reference to that mesh (e.g. using FReferenceChainSearch::FindAndPrintStaleReferencesToObject).
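Roughly, from code (an untested sketch): force a garbage collection pass, and if the mesh is still alive afterwards, walk the reference graph and log the chains keeping it that way.

```cpp
#include "Engine/SkeletalMesh.h"
#include "UObject/UObjectGlobals.h"
#include "UObject/ReferenceChainSearch.h"
#include "UObject/WeakObjectPtrTemplates.h"

// Sketch: code equivalent of the console route; the weak pointer lets us
// safely check whether GC actually destroyed the mesh.
void ForceGCAndDiagnose(TWeakObjectPtr<USkeletalMesh> Mesh)
{
	CollectGarbage(GARBAGE_COLLECTION_KEEPFLAGS, /*bPerformFullPurge*/ true);

	if (USkeletalMesh* StillAlive = Mesh.Get())
	{
		// Still reachable: print the shortest reference chains holding it.
		FReferenceChainSearch RefChainSearch(StillAlive, EReferenceChainSearchMode::Shortest);
		RefChainSearch.PrintResults();
	}
}
```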