What are the performance killers for VR (e.g. Oculus Quest 2)? All post-processing, all screen-space effects?

All great points, nice list. I do believe some options were removed a few versions ago; the Mono Culling Distance and the Legacy shading model seem to be gone as well. Features come and go with every new version.

See more here about the Mono Culling Distance:

2 Likes

I figured I'd make another comment and respond to each follow-up, just to keep my first comment nice and clean :slight_smile:

How about Quest 2, and FPS?

When you're developing a standalone Quest app, you're basically making a mobile app. The Quests run on an Android framework, so you have to adhere to a mobile-centric development strategy. This means no dynamic lights, no post-processing, etc.

Static Ambient Occlusion

Right-click in a material graph and find the node called PrecomputedAOMask. Plug the output of this node into the Alpha of a Lerp node, a constant of 0 into the B input, and the rest of the material into the A input (it could be the other way around, I forget lol). Plug the Lerp into Base Color, apply the material to some objects in your scene, and build lighting. You should now see AO on your objects! You have some per-material control here too; for example, if you add a Multiply node after the AOMask node you can control how much AO shows. I'm planning on making a video about this, I'll keep you posted :wink:
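If it helps to see what that node chain boils down to, here's the blend written out as plain math in C++ form. This is illustrative only - the real thing lives in the material graph, and the function name, `AOIntensity` parameter and mask polarity are my assumptions:

```cpp
#include "Math/Color.h"                 // FLinearColor
#include "Math/UnrealMathUtility.h"     // FMath::Clamp

// Hypothetical illustration of the node chain above - NOT real material code,
// just the math it evaluates. AOMask is the PrecomputedAOMask sample (0..1),
// AOIntensity is the optional Multiply that scales the effect.
FLinearColor ApplyBakedAO(const FLinearColor& BaseColor, float AOMask, float AOIntensity = 1.f)
{
    // Lerp(A = rest of material, B = black, Alpha = mask). If your objects come
    // out inverted, swap A and B - the mask polarity is easy to mix up.
    const float Alpha = FMath::Clamp(AOMask * AOIntensity, 0.f, 1.f);
    return BaseColor * (1.f - Alpha) + FLinearColor::Black * Alpha;
}
```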

RR Occlusion Queries

Sorry, I meant Round Robin Occlusion Queries; I think I mentioned them further up in the comment. Apologies for the confusion :slight_smile:

Fog in VR

Give this a shot: https://www.youtube.com/watch?v=DnfFbFjxI_M

It's a post-process material, so you'll need to add it to a post-process volume and be using deferred rendering (see my original comment for some pros and cons). To be clear, you can use component-based fog in VR, such as Exponential Height Fog and Atmospheric Fog; you'll just incur a performance hit.

Static/Dynamic lighting and shadows in VR

The trick here relates to mobility. Under the Transform section of the Details panel, there are three options: Static, Stationary and Movable. For your lights (e.g. a directional light): if you set it to Movable, all shadows will be dynamic and the engine will assume the light can move (a day/night cycle is a good example of this). If you set it to Stationary, the light will be included in your lighting build and you can change its intensity and color at runtime, but you can't move it. If you set it to Static, the light will be included in your lighting build and can't be changed at all in-game.

Then, for your actual objects: if they're set to Static, their lighting will be baked and they can't be moved. Objects set to Stationary or Movable can still move and cast dynamic shadows from stationary and movable lights. Using this knowledge you can take some control over how much processing the engine spends on lights and shadows - heavy props such as tables and cars can be Static, while smaller objects you can pick up and interact with can be Stationary or Movable.
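If you ever need to set this up from code rather than the Details panel, mobility is just a property on the component. A minimal sketch, assuming you already have references to the components (the function itself is hypothetical):

```cpp
#include "Components/StaticMeshComponent.h"
#include "Components/DirectionalLightComponent.h"

// Hypothetical setup: mark a heavy prop as Static so its lighting can be baked,
// and keep the sun Stationary so intensity/color can change but shadows stay cheap.
void ConfigureMobility(UStaticMeshComponent* TableMesh, UDirectionalLightComponent* Sun)
{
    if (TableMesh)
    {
        TableMesh->SetMobility(EComponentMobility::Static);   // baked lighting, never moves
    }
    if (Sun)
    {
        Sun->SetMobility(EComponentMobility::Stationary);     // baked, limited runtime changes
    }
}
```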

Code like it's 1999 :wink:

The meaning of this is that back in the day, the capability of game development software far exceeded that of real-time renderers and consumer-level PC hardware, so developers had to come up with optimization methods that greatly reduced per-frame processing time. Some of the bigger ones are baked lighting, object culling, and low-poly models. The reason we still say 'code like it's 1999' today is that VR is extremely demanding computationally: you're basically rendering every frame three times, you need real-time motion tracking in full 3D, and the pixel resolution of modern VR headsets is around the 4K mark. Imagine your PC running a game at 4K, but needing to render every 4K frame twice for the headset and once more for your flatscreen, while maintaining 90fps at all times. So, to achieve this, we code like it's 1999 :stuck_out_tongue:

Optimized meshes

If you're working for a studio, you'll be given a poly budget for the objects you make, tied to an acceptable on-screen polygon count for the complexity of your project. As a very general rule, the more the player will see or interact with an object, the more detail it should have. To learn more about low-poly/optimized modeling, check out this legendary thread on Polycount: LOWPOLY (or: the optimisation appreciation organisation) — polycount

Multiplayer IK

Replicating that kind of thing over the network can present a world of issues. Consider Counter-Strike: Global Offensive. When you kill an enemy on a multiplayer server, their player model ragdolls onto the ground, but this is not replicated to all clients (in this example, the only thing that's replicated is the player model's collision). Each client will see a slightly different ragdoll effect, because it's not something that every player needs to see in exactly the same way. So it's basically a matter of network traffic; only game-critical data should be transmitted to and from the server, and physics effects generally aren't replicated.
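In UE4 terms, 'only game-critical data' usually comes down to which properties you choose to replicate at all. A rough sketch - the class and property names are made up for illustration, not from any real project:

```cpp
// MyVRCharacter.h (hypothetical class)
#include "GameFramework/Character.h"
#include "Net/UnrealNetwork.h"
#include "MyVRCharacter.generated.h"   // must be the last include

UCLASS()
class AMyVRCharacter : public ACharacter
{
    GENERATED_BODY()

public:
    // Game-critical: everyone needs to agree on whether this player is alive.
    UPROPERTY(Replicated)
    bool bIsAlive = true;

    // Cosmetic: ragdoll/IK pose details stay local to each client, NOT replicated.
    FVector LocalRagdollImpulse = FVector::ZeroVector;

    virtual void GetLifetimeReplicatedProps(TArray<FLifetimeProperty>& OutLifetimeProps) const override
    {
        Super::GetLifetimeReplicatedProps(OutLifetimeProps);
        DOREPLIFETIME(AMyVRCharacter, bIsAlive);   // only this crosses the network
    }
};
```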

LOD aggressively

Not necessarily using maximum LODs with extreme poly reduction, but making sure everything has at least a near and a far LOD will help. Open a static mesh in the editor and check out the Details panel. There's a category called 'LOD Settings' with an option called 'Number of LODs'. Set this higher than 1 and click 'Apply Changes'. Once the calculation is finished, switch to wireframe mode and zoom in and out of the model - you should see the polycount in the top-left corner of the window go down as the object gets smaller on screen. This is what you're looking for. Also, in your main editor viewport, if you click Lit > Optimization Viewmodes > Quad Overdraw, you can see the overdraw rate of your scene. You want as much dark blue on screen as possible, for as much of the scene as possible.
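Building the LODs is an editor step, but you can sanity-check or pin them from C++ while profiling. A small sketch, assuming the mesh already has LODs generated (the function name is made up):

```cpp
#include "CoreMinimal.h"
#include "Components/StaticMeshComponent.h"
#include "Engine/StaticMesh.h"

// Log how many LODs a mesh ended up with, and optionally force a component to a
// specific LOD while you eyeball polycount/overdraw in the viewport.
void InspectLODs(UStaticMeshComponent* MeshComp)
{
    if (!MeshComp || !MeshComp->GetStaticMesh())
    {
        return;
    }

    const int32 NumLODs = MeshComp->GetStaticMesh()->GetNumLODs();
    UE_LOG(LogTemp, Display, TEXT("%s has %d LODs"), *MeshComp->GetName(), NumLODs);

    // 0 = automatic LOD selection; 1..N forces LOD (N-1), handy for debugging.
    MeshComp->SetForcedLodModel(0);
}
```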

Material calculations/video memory

As a rule, anything you do in a material graph is handled by the GPU and anything you do in a Blueprint graph is handled by the CPU. When developing, anything you can offload from a Blueprint to the GPU should probably be offloaded. For example, you can add some rotation to an object on Event Tick, but this won't be as performant as using a RotateAboutAxis node plugged into World Position Offset in the object's material, even though the end result is basically the same. This is another one of those things that you'll need to consider at the start of each project - hardware requirements, processing speeds, frame budgets, on-screen polycounts, texture references, all that good (read: headache-inducing) stuff. A lot of indie devs just start working and optimize later, and whether that's a better approach than setting limits before development starts is a debate for another day :slight_smile:
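To make the rotation example concrete, here's the CPU-side version as a sketch (a hypothetical helper you'd call from an actor's Tick), with the GPU alternative noted in the comment:

```cpp
#include "GameFramework/Actor.h"

// CPU version: call from an actor's Tick. Costs game-thread time every frame,
// for every spinning actor.
void SpinOnGameThread(AActor* Prop, float DeltaSeconds)
{
    const float DegreesPerSecond = 45.f;
    if (Prop)
    {
        Prop->AddActorLocalRotation(FRotator(0.f, DegreesPerSecond * DeltaSeconds, 0.f)); // yaw only
    }
}

// GPU version needs no C++ at all: Time -> RotateAboutAxis -> World Position Offset
// in the object's material. Visually near-identical, but evaluated on the GPU.
```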

5 Likes

So first add a common diffuse main material for everything, and if needed add another material (e.g. a specular material for metal)? What's the suggested maximum material count (e.g. for an FPS on Quest 2)?

You mean it's better to enable 'Fully Rough' first for all materials, then use other ways to get the desired effect, such as a Fresnel node and a normal map based on it? (I think I might be misunderstanding here :joy:)

'Beware normal maps, however, because in some cases they can display differently in each eye.' So… one should be very careful when using normal maps? Are there any detailed notes about this?

So one ought to use a local (non-unbound) post-process volume only at the locations that need it?

What tools do you use to optimize? I like RenderDoc, but the previous Oculus version wasn't able to capture frames on Quest 2. I downloaded their new version yesterday and hope it'll work.

Thanks.
Are there new options that need to be set in UE 4.27 related to this?

I've used the AMD profiler for identifying things like computational bottlenecks (whether the CPU or the GPU is slowing down the software), but beyond that I don't use any tools other than the built-in UE ones. The Optimization Viewmodes are a good place to start; the ones I use most are Quad Overdraw and Shader Complexity. You can also enter the console commands 'stat net' and 'stat fps'. My usual workflow is to develop things, do my best to maintain fps, and then hand the project over to a technical supervisor for more explorative profiling.
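If you want those stat readouts on without opening the console each time (handy on-device), you can fire them from a player controller; a minimal sketch with a made-up helper name:

```cpp
#include "GameFramework/PlayerController.h"

// Turn on some common stat displays from code, e.g. on BeginPlay of a debug pawn.
void EnableDebugStats(APlayerController* PC)
{
    if (PC)
    {
        PC->ConsoleCommand(TEXT("stat fps"));
        PC->ConsoleCommand(TEXT("stat unit"));  // game/draw/GPU thread times
    }
}
```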

https://developer.amd.com/amd-uprof/

these two?

looking forward to it very much :smiley:

No, no, my apologies… lack of VR dev experience, sorry.

Isn't forward rendering the ONLY choice for VR? :joy: I know these types of fog, but I'm not sure what 'component-based' means here; I'll search the docs later. It just seems very weird to have no fog at all, doesn't it? Do all VR games really have no fog? I can't imagine that, lol - that would be more like before 1999 at this point… If some do have fog, is there a common technique, again taking an FPS game as an example?

What's the 3rd render for? The 'flatscreen' view of the VR game?

I'm a programmer, not an artist. So is there a triangle-count specification for characters, guns, buildings, landscape, etc. on Oculus Quest 2? I'll look at the link you gave :grinning:

How about upper-body IK where the head and hands are controlled by the HMD and controllers? I don't know why current VR games only have hands but no arms or upper body.

This is another one of those things that you’ll need to consider at the start of each project - hardware requirements, processing speeds, frame budgets, on-screen polycounts, texture references, all that good (read: headache-inducing) stuff. A lot of indie devs just start working and then optimize later, and whether this is a better approach than setting limits before development really starts is a debate to have another day
That's why I want to find and set a standard for Oculus Quest 2 for artists to develop against :)

That option is already gone in UE 4.27.0.

//
Are Box/SphereReflectionCapture actors also not allowed in VR?
//
How about GPU instancing in VR? What about planting grass and trees?
//
Since SSR can't be used, will it be very difficult to have water - including wet spots, puddles, rivers, the sea?
//
Should anything related to Lightmass not be used, e.g. a LightmassImportanceVolume?

Yeah, that's it :slight_smile: It will only work if you have AMD hardware, though.

Master material with instances

Check out this video for how to do it: https://www.youtube.com/watch?v=AeOQZWEi1gU

It doesn’t cover PrecomputedAOMask but if you follow the video then you’ll get a master material that you can configure to look like basically any surface you want (if you have the textures).

1 Like

OK, right now I'm using Quest 2, and I'll try the tools Oculus provides.
Thanks again for the GREAT notes, thanks a lot!

1 Like

Target Hardware
Is it better to leave it Unspecified if the project might be cross-platform, and set each setting manually?



Actually, I asked about this topic before:

And please check above - I wrote it all in one reply, to keep things cleaner.




This article from Oculus describes their alternative solution for tone mapping. I haven’t dug too deeply into it yet.

Every frame 3 times

Once for each eye and a third time for your monitor, since VR games do still display on the PC they're running on as well as the headset. Obviously for the Quest/Quest 2 it's just twice.

Upper-body IK, and why VR games usually only show hands

Aside from the technical challenge, this relates to the psychology of VR. Ever played Pavlov on Steam? The general issue with arm IK is that the software doesn't know which way your elbow is pointing, and people tend to get uncomfortable when the arms in-game don't match what their arms are doing IRL. For example, you can hold your hand in front of you and move your elbow around without rotating your hand, but the software has no way of knowing you're doing that. For seasoned VR gamers this isn't an issue, but for new players it can be. Now consider Half-Life: Alyx. As we know, Valve focus-tests to an absolute fault, and after some five years of development and testing they decided to stick with the disembodied hands that are familiar to most VR apps - probably for the same reason that ladders in that game are teleports rather than some kind of manual climbing mechanic. Gorilla Tag has great body physics; if you haven't checked out that game, you definitely should - it's not just an interesting VR case study, it's bonkers fun :stuck_out_tongue:

Isn't forward rendering the ONLY choice for VR?

Not at all! The original question was about VR performance, and Forward Rendering gets better performance than Deferred Rendering, but they’ll both work just fine in VR.

Component-based fog

By 'component-based', I'm referring to the engine components that generate fog, namely Exponential Height Fog and Atmospheric Fog.

Target Hardware

For Quest/2, you want it set to Scalable 2D/3D and Mobile/Tablet. The major thing these settings do anyway is set a bunch of console variables in your project’s config folder.
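If you're curious what those presets actually touch, the same console variables are reachable from C++ at runtime too. A small sketch reading and lowering `vr.PixelDensity`, one of the more useful VR scalability cvars - treat the exact value (and the function name) as an example, not a recommendation:

```cpp
#include "CoreMinimal.h"
#include "HAL/IConsoleManager.h"

// Read and adjust a console variable at runtime. vr.PixelDensity scales the
// per-eye render resolution and is a common first lever for VR performance.
void LowerPixelDensity()
{
    if (IConsoleVariable* CVar = IConsoleManager::Get().FindConsoleVariable(TEXT("vr.PixelDensity")))
    {
        UE_LOG(LogTemp, Display, TEXT("vr.PixelDensity was %f"), CVar->GetFloat());
        CVar->Set(0.8f);   // render at 80% per-eye resolution
    }
}
```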

Use Legacy Shading Model no longer in the engine

This was pointed out above - some of the settings I've suggested have been changed or removed in recent versions :stuck_out_tongue: I'm still working in 4.24 and 4.25 since 4.26 was pretty broken for VR, and I haven't dug into the OpenXR template much yet, but it's on my plate at my day job right now.

Reflection captures

As far as I'm aware, reflection captures work fine in VR - probably not in a standalone Quest app, though. I work in PC-powered VR (Reverb G2 and Rift S mostly) about 90% of the time, and what I've learned is that almost all 'flashy' features of the engine aren't compatible with the latest UE4 Android SDK.

Plants/grass/trees

There are ways. The traditional alpha-card method of creating foliage isn't performant enough for VR, but you can (and this isn't actually as time-consuming as it sounds) individually model each blade of grass or each leaf on a tree and not use alpha at all. You can also LOD and cull foliage meshes, which will help too. I'd highly recommend talking to a technical artist about this specific topic, because beyond very general performance advice, every project has different demands.
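On the instancing side, the usual UE4 tool is the Hierarchical Instanced Static Mesh component (it's what the Foliage tool uses under the hood): one mesh asset, many cheap copies, with per-instance culling and LODs. A rough sketch - the function, distances, and grid layout are just illustrative assumptions:

```cpp
#include "Components/HierarchicalInstancedStaticMeshComponent.h"
#include "Engine/StaticMesh.h"

// Scatter a grass mesh as instances, culled beyond a distance so the Quest
// isn't drawing grass the player can't meaningfully see.
void ScatterGrass(UHierarchicalInstancedStaticMeshComponent* GrassHISM, UStaticMesh* GrassMesh)
{
    if (!GrassHISM || !GrassMesh)
    {
        return;
    }

    GrassHISM->SetStaticMesh(GrassMesh);
    GrassHISM->SetCullDistances(/*StartCullDistance=*/2000, /*EndCullDistance=*/3000); // in cm

    for (int32 X = 0; X < 50; ++X)
    {
        for (int32 Y = 0; Y < 50; ++Y)
        {
            GrassHISM->AddInstance(FTransform(FVector(X * 100.f, Y * 100.f, 0.f)));
        }
    }
}
```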

Water

Not with the 4.26 water component, lol. For a puddle or relatively still water, you can use a plane with a highly reflective surface, a normal map, and maybe some world displacement to simulate the behavior of water. For oceans in a Sea of Thieves style, or a beach with waves, I'm honestly not sure. High-end VFX like that may be very challenging to recreate performantly in VR. I could be wrong! But thinking about it, I can't name a game that uses water in VR.

Lightmass

Nah, you absolutely need a Lightmass Importance Volume - just around the areas where the player will go, but you'll need one of these. Your Lightmass settings (in the World Settings window) determine how the lighting build is generated, and (if I'm remembering correctly) the engine will throw up errors if you don't have one in your scene.

Quest 2 polycounts and standards

Yeah, none that I've found, sadly :frowning: I'm certain that there is an upper limit for poly count, but I don't know what it is. This guy probably does, though: https://www.youtube.com/c/GDXRLEARN

Highly recommend checking out his videos, he develops for Quest and Quest 2 and has heaps of videos.

I would also strongly suggest checking out both these links:

How Epic optimized UE4 for Robo Recall: Make & Maintain Framerate! "Technical Postmortem for Robo Recall, and Beyond!" by Nick Whiting

How Drifter optimized Robo Recall for Oculus Quest: Learn how Drifter Entertainment leveraged elegant optimizations to bring Robo Recall to the Oculus Quest - Unreal Engine

There's also this excellent UE4 level-slice breakdown that covers the fundamentals of environment design in thorough detail (you can download a PDF of the whole case study via Google; I'm not sure if linking it directly is allowed here): UE4 The Corridor Project: Step-by-Step Workflow to Construct an Environment in 10 Hours with Unreal Engine 4 Download

4 Likes

Thanks again!! :grin:

Even with PC-powered VR? lol. Seems I can give up water temporarily, since I'm working on Quest 2.

By the way, are you familiar with multi-platform dev (e.g. PC VR and mobile VR)? Are there any good notes or tutorials for this? E.g. I think the first step is to configure different render settings for each platform.

The closest to that I've achieved is to have two different projects, one configured for PC VR and the other configured for Android-based Quest VR, and just migrate assets between the two. I haven't yet achieved a single project where I can package for Windows and then package for Quest. That said, I haven't spent a meaningful amount of time working on that specifically, but I've got a lead, and hopefully some other devs can confirm this because we're at the very edge of my knowledge with this question :stuck_out_tongue:

In the editor, go to Window > Developer Tools > Device Profiles. With this window, you can set and edit cvars and console commands based on which platform you're packaging for. Now, don't quote me on this - I haven't explored it much - but I think this can be the way to have one project that successfully packages for multiple platforms.

//
Displacement maps in VR?
//
Particle systems in VR?
//
And do you know when I should change the default display refresh rate of the Quest 2?
https://forums.unrealengine.com/t/need-to-change-display-refresh-rate-for-quest2-while-dev-game/249148

Thanks!

Displacement maps in VR

Avoid. If you really need the extra geometry, just add the extra geometry instead of using one of the most costly material effects in all of CG :stuck_out_tongue:

Particle systems

I use a couple particle systems in my current VR project, namely a muzzle flash effect (combining smoke and sparks) and a cloud of moths for light sources (a single orbital particle effect). So yeah, traditional particle effects are compatible, but I haven’t used Niagara effects in VR yet so I’m not sure how that goes in terms of performance.
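For reference, spawning one of those one-shot Cascade effects from C++ is a one-liner; a sketch, with the asset pointer and function name assumed rather than taken from any real project:

```cpp
#include "Kismet/GameplayStatics.h"
#include "Particles/ParticleSystem.h"

// Spawn a fire-and-forget muzzle flash effect at the given location/rotation.
void SpawnMuzzleFlash(UWorld* World, UParticleSystem* MuzzleFlashFX,
                      const FVector& Location, const FRotator& Rotation)
{
    if (World && MuzzleFlashFX)
    {
        UGameplayStatics::SpawnEmitterAtLocation(World, MuzzleFlashFX, Location, Rotation);
    }
}
```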

Moderation is key! Have a go and see if the effects you want to implement incur a performance hit, and use your best judgement to decide whether that performance hit is acceptable or not. Always test for performance on your minimum target hardware.

VR refresh rate

Much like traditional monitor refresh rates, this is linked to your game's frame rate. Consider a 60Hz flat-panel computer monitor. At 60fps, you'll get the smoothest experience possible, because the number of frames the machine renders exactly matches the frequency at which the screen can display them. If your FPS is lower than 60, you'll get pauses (delays between frames while the screen waits for the PC), and if it's higher than 60 you'll get tearing (the screen starts the next frame before it's finished displaying the last one). At 30fps you'll also get a smooth-ish experience, because the refresh rate is exactly fps × 2. The same is true in VR, but the effects are far worse: if you drop below your HMD's refresh rate, you'll get ghosting, tracking latency, delays, and basically all the things that make people sick. The current VR standard is 90Hz, so your project needs to hit 90fps or higher 99+% of the time.
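Put as numbers: at 90Hz you have 1000 / 90 ≈ 11.1ms per frame for everything (game thread, render thread and GPU combined), versus 16.7ms at 60Hz and 13.9ms at 72Hz. A trivial sketch for watching that budget from code (the function name is made up, and you'd call it from something that ticks every frame):

```cpp
#include "CoreMinimal.h"

// Warn whenever a frame blows the 90 Hz budget. DeltaSeconds is the frame time
// passed into Tick.
void CheckFrameBudget(float DeltaSeconds)
{
    const float TargetHz = 90.f;
    const float BudgetMs = 1000.f / TargetHz;      // ~11.1 ms per frame, total
    const float FrameMs  = DeltaSeconds * 1000.f;

    if (FrameMs > BudgetMs)
    {
        UE_LOG(LogTemp, Warning, TEXT("Frame took %.2f ms (budget %.2f ms)"), FrameMs, BudgetMs);
    }
}
```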

No worries, glad to be helping :slight_smile:

2 Likes