Handling a heavy quantity of unique static meshes

Hi everyone !

This post could belong in the Archviz and VR section as well, but it is also about rendering optimization in general.

So, we have been trying to solve an issue for a few months now, but could not really find a suitable solution. Lots of text incoming, you’ve been warned! (I made the important bits bold for visibility.)

To put it simply, we are working on a UE4-based solution to review large BIM projects made with Autodesk Revit in Virtual Reality.
Our projects are fully interactable (i.e. doors, stationary lights, object metadata, etc.) with a fully dynamic day and night cycle.

The common workflow would be to export an FBX file from Revit, modify it in 3ds Max by merging the meshes for better performance and unwrapping them, and finally import the resulting meshes into UE4.

The problem is, we can’t merge our meshes, since each Revit/StaticMesh object must be distinguishable from the others (and is linked to a database): when you have more than 10,000 of these items, performance is a huge problem for VR!
And since each object is unique, we can’t use instancing, mesh batching, or merging methods.

We tried to tweak the HZB occlusion culling used by default, but it seems the engine can’t really handle that many objects dynamically.
Another method I tried is to precompute the visibility of the scene and use that for scene culling, but although performance is better, it is still nowhere near usable in VR.
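For reference, the precomputed-visibility path is driven by a render setting plus a few per-level options; a sketch of the relevant entries (the values here are examples, not our exact ones):

```ini
; DefaultEngine.ini - allow baked visibility data to be used at runtime
[/Script/Engine.RendererSettings]
r.AllowPrecomputedVisibility=1

; World Settings (set per level in the editor, not in an ini):
;   Precompute Visibility = true
;   Visibility Cell Size  = 200   ; smaller cells cull tighter but grow the data
; Coverage is defined by Precomputed Visibility Volumes placed in the level,
; and the data is rebuilt as part of a lighting build.
```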

Now, our lighting workflow might not be the best for performance (static baked skylight with AO for ambient lighting + movable directional light for the sun + stationary for interactable lights and static for the rest), but the results are visually convincing, and being able to change the time of day on the fly is a no-brainer for demonstrations.

Anyway, in a few lines, that’s where we are right now. Things we still have to explore:

  • LODs (the meshes aren’t really polycount-heavy, there are just a lot of them)
  • sublevels (might require too much work for each project)
  • creating merged chunks of the project in 3ds Max and using them for distant rendering instead of individual objects (might not work well with culling)
  • faking individual objects by using only their colliders (closer to the common archviz workflow; sounds like a good idea, but light baking and lightmap UVs are going to be monstrous)

We also had some very interesting performance results using NVIDIA’s VRWorks branch of UE4 (but we moved to UE4.15, and their branch is 4.14 and has not been updated yet).

What are your thoughts on this matter ? :slight_smile:

Cheers,
aka LegendreVR

EDIT: The hardware we currently use is a GTX 1080, an E5-1650 v4 Xeon @ 3.60 GHz and 32 GB of memory.

Hi there,

can you share some more numbers? CPU and GPU times, triangle count, etc. It would also be important to know how big your static meshes and your whole scene are. Maybe post a screenshot?

Hey PrHangs, here are some example stats for a project with no interactions other than moving around, “static” lighting and a limited number of entities:

  • Actor count = 8914
  • 1 Stationary baked directional light
  • 1 Stationary baked SkyLight
  • Occlusion is working as intended, verified with FREEZERENDERING.
  • The building is around 13,000 × 11,000 UU

Here’s a preview of the model complexity :


Project settings :

  • FSAA enabled
  • all other effects disabled
  • Instanced Stereo Enabled
  • Deferred Rendering
  • r.TextureStreaming 0
  • r.ScreenPercentage 200
  • vr.EnableMotionControllerLateUpdate 0

Reducing the ScreenPercentage does not seem to improve GPU performance that much (going past 200 destroys it, though).

Engine stats :


SceneRendering stats:


GPU usage is crippled by the static draw pass (to be expected with that many static meshes).


The CPU usage reported in SteamVR seems a lot higher than it should be…

Can you please retake your stat command screenshots, but this time while running the game in VR? Taking them from the Editor like you are doing now does not show how things truly run, so it is hard to say what is causing your issues. From the info I am looking at now, with everything in the green, your project looks to be running fine.

Hey Sam, thanks for having a look at this.

These frame times are certainly fine for 2D, but for VR the frame time must not go beyond 11 ms (1 s / 90 fps ≈ 11.1 ms) to have a smooth experience. Async reprojection is a solution for frame drops, but relying on it to reach 90 fps sounds like a bad idea and makes camera and object translations stutter.

Here’s some screens inside and outside of the building.


As you can see, we are well over the 11 ms cap.

This is with ASR enabled in the SteamVR settings; disabling it gives us a mess with lots of frame drops (as expected).

Thanks for re-uploading the shots; the new ones give me a better idea of what is going on, but you already tracked the issue down: you are simply drawing too much. Have you looked into using both cull distance volumes and manually setting the cull distance on objects? Due to your restrictions (you mentioned that all assets are linked to a database, so they have to be unique), this is probably the only thing you can do without modifying the engine to work with your required workflow. Tim Hobson from the docs team did a great write-up on his blog covering culling, which you can read at the following link.

http://timhobsonue4.snappages.com/culling-visibilityculling.htm

Give that a try and let me know how it goes.

I’m not sure why you really need that many separate objects to be displayed in VR.

Thanks Sam !

I already applied some of the techniques Tim mentioned in his tutorials (really well explained, by the way; he’s done a great job!), but I’ll have a deeper look at cull distances tomorrow and get back to you in this thread.
In the meantime, here’s a screenshot of the InitViews stats without touching cull distances:

I wish I could merge all these objects by material or something along those lines, but we are talking about industrial project reviews, which are a long way from the video game workflow (I will surely lose my hair after a few years of working on Revit-made projects, which are a mess to export/import from a realtime 3D modeling perspective).

Each object has its own properties and needs to be separated from its neighbours so you can interact with it (isolate, measure, get metadata, etc.).

After a few years of modelling and developing classic VR game content for the DK1/DK2 and the GearVR, chasing every draw call, this really feels like a different way to handle everything…

> since each Revit/StaticMesh object must be distinguishable from the others (and is linked to a database)

Can you give more details? How must they be distinguishable, visually, or through some kind of UI selection interface?

If just through a selection interface, what about a system using HLOD? When the user selects an HLOD cluster, hide the cluster and unhide all the objects that were used to make up the cluster. Then repeat the selection trace to find and highlight the individual object. When they unselect it, rehide all the objects in the cluster, and re-show the cluster.

You say the project is fully interactable, but only mention doors as being the things that actually move, right? Or can placement of the walls, etc. dynamically change too?

Also, have you tried with DX12 or Vulkan?

To keep it short, we use the name of each object as its ID, then check a CSV file to find the right metadata. The user can highlight an object by aiming at it, and a UI will display the related information.

That’s kind of what I had in mind as an optimization solution: loading groups of separated objects in the vicinity of the user, and using merged meshes for medium and long distances (basically a custom LOD system).

For now we have a restricted set of elements the user can interact with (we want to be able to spawn them at runtime in VR), including doors and physical objects you can move around. On the smaller projects we also built tools to toggle stationary IES lights (movable lights have too much performance impact).

I am not sure the scale of our projects would allow modifying every object, but down the road we want to be able to draw in 3D space like in Tilt Brush. I wonder if there is an asset for this kind of tool on the Marketplace?

Aren’t new projects already set to DX12? And is Vulkan stable enough for daily use?

I have high hopes for NVIDIA’s VRWorks branch of UE4 (on a test project with Lens Matched Shading, performance went up by at least 40% with no visible difference), but we moved our Blueprints to 4.15 and can’t roll back to 4.14.

So much to explore, so little time! Being a team of one, I can’t devote all of my time to a single R&D subject. :stuck_out_tongue:

Right, so I got the editor to run in DX12 mode, but it seems my 1080 only reports feature level 11_0? Isn’t Pascal up to 12_1? My drivers are neither fully up to date nor outdated, so maybe I am missing a DX12 library somewhere (if DX installers are still a thing)?

I recommend using level streaming to keep the actor count low. Especially in the case of a building, you can make each room interior a sub-level that is only streamed in when the player approaches.
Make the outer shell of the building and the landscape environment the persistent level, and put everything local into sub-levels.

@LegendreVR: Have you made any progress ?

Hey PrHangs,

Not really. DX12 crashes on compiled projects, sublevels require too much work considering the size of our projects, and manual cull distances would take too much time to define for each object.

I got an OK performance boost by merging all the meshes into one big blob and using only the individual object colliders for our metadata tools, but sometimes (on bigger buildings) the merged mesh simply doesn’t display for some reason (I’m using the “render in main pass” toggle).

The easiest way would be to wait for the VRWorks 4.15 branch; I don’t have that much time dedicated to debugging and performance improvements.

SteamVR recently added experimental support for DX12 (“use at your own risk”), but UE4 still uses an older SDK without DX12 support. Also, SteamVR currently seems more focused on Vulkan; its Vulkan support appears far more actively developed than its DX12 support. That’s good, since there’s really no need for DX12 when you can have Vulkan. But UE4 currently does not support Vulkan in VR, so you are limited to DX11 for now. For drawing so many individual objects, you really want a low-level API; that is definitely not a strength of DX11.

So you can just hope that Vulkan VR support is coming soon in UE4, which is hopefully the case: Vulkan status - Rendering - Unreal Engine Forums

Thanks for the heads-up, John. We are not bound to a single API since our tools are intended for internal use, and Vulkan indeed has a lot of interesting features.

I’ll subscribe to the thread you linked in case there’s some news about VR+Vulkan. :slight_smile: