I figured I'd make another comment and respond to each follow-up, just to keep my first comment nice and clean.
How about Quest 2 and FPS?
When you're developing a standalone Quest app, you're essentially making a mobile app. The Quests run on an Android framework, so you have to stick to a mobile-centric development strategy: no dynamic lights, no post-processing, etc.
Static Ambient Occlusion
Right-click in a material graph and find the node called PrecomputedAOMask. Plug its output into the Alpha of a Lerp node, a constant of 0 into the B input, and the rest of the material into the A input (this could be the other way around, I forget, lol). Plug the Lerp into Base Color, apply the material to some objects in your scene, and build lighting. You should now see AO on your objects! You get some per-material control here too; for example, if you add a Multiply node after the AOMask node you can control how much AO shows. I'm planning on making a video about this, I'll keep you posted.
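If it helps to see what that node setup boils down to mathematically, here's a rough sketch in C++ (the function name and AOStrength are just illustrative, not engine parameters):

```cpp
#include "CoreMinimal.h"

// Rough sketch of the per-pixel math the graph ends up doing:
// Lerp(BaseColor, Black, AOMask) is the same as BaseColor * (1 - AOMask),
// and the Multiply node after the AO mask acts as a strength control.
// If the Lerp inputs go the other way around, the (1 - x) just flips.
FLinearColor ApplyBakedAO(const FLinearColor& BaseColor, float AOMask, float AOStrength)
{
    const float Occlusion = FMath::Clamp(AOMask * AOStrength, 0.f, 1.f);
    return BaseColor * (1.f - Occlusion); // darker where the baked mask says the surface is occluded
}
```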
RR Occlusion Queries
Sorry, I meant Round Robin Occlusion Queries; I think I mentioned it further up in the comment. Apologies for the confusion.
Fog in VR
Give this a shot: https://www.youtube.com/watch?v=DnfFbFjxI_M
It's a post-process material, so you'll need to add it to a post-process volume and be using deferred rendering (see my original comment for some pros and cons). To be clear, you can use component-based fog in VR, such as Exponential Height Fog and Atmospheric Fog; you'll just incur a performance hit.
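For what it's worth, you can also hook the material up from C++ instead of the Details panel. This is just a sketch, assuming you already have a post-process volume placed in the level and a reference to the fog material from the video:

```cpp
#include "EngineUtils.h"
#include "Engine/PostProcessVolume.h"
#include "Materials/MaterialInterface.h"

// Sketch: find the first post-process volume in the level and add the fog
// material to it as a blendable. Assumes deferred rendering is enabled and
// that FogMaterial points at the post-process material from the video.
void AddFogToPostProcess(UWorld* World, UMaterialInterface* FogMaterial)
{
    for (TActorIterator<APostProcessVolume> It(World); It; ++It)
    {
        It->Settings.AddBlendable(FogMaterial, 1.f); // weight of 1 = fully applied
        return; // only touch the first volume we find
    }
}
```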
Static/Dynamic lighting and shadows in VR
The trick here relates to object mobility. Under the Transform section of the Details panel there are three options: Static, Stationary, and Movable.

For your lights (e.g. a directional light): if you set it to Movable, all of its shadows will be dynamic and the engine will assume the light can move (a day/night cycle is a good example of this). If you set it to Stationary, the light will be included in your lighting build and you can still change its intensity and color, but you can't move it. If you set it to Static, the light will be included in your lighting build and can't be changed at all in-game.

Then, for your actual objects: if they're set to Static, their lighting will be baked and they can't be moved. Objects set to Stationary or Movable can cast dynamic shadows (as long as there's a Stationary or Movable light around to cast them from).

Using this knowledge you can take some control over how much processing the engine spends on lights and shadows - heavy props like tables and cars can be Static, while smaller objects you can pick up and interact with can be Stationary or Movable.
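If you ever need to set this from C++ rather than the Details panel, mobility is just a property on the component. A minimal sketch (the component names here are only examples):

```cpp
#include "Components/DirectionalLightComponent.h"
#include "Components/StaticMeshComponent.h"

// Sketch: the Static/Stationary/Movable dropdown maps to EComponentMobility.
void SetUpMobility(UDirectionalLightComponent* Sun, UStaticMeshComponent* Table, UStaticMeshComponent* Cup)
{
    Sun->SetMobility(EComponentMobility::Stationary);  // in the lighting build, intensity/color still adjustable
    Table->SetMobility(EComponentMobility::Static);    // heavy prop: fully baked, never moves
    Cup->SetMobility(EComponentMobility::Movable);     // interactive object: moves and casts dynamic shadows
}
```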
Code like it's 1999
The meaning of this is that back in the day, the capability of game development software far exceeded the capability of real-time renderers and consumer-level PC hardware, so developers needed to come up with optimization methods that greatly reduced per-frame processing time. Some of the big ones are baked lighting, object culling, and low-poly models. The reason we still say 'code like it's 1999' today is that VR is extremely demanding computationally: you're essentially rendering every frame three times, you need real-time motion tracking in full 3D, and the pixel resolution of modern VR headsets is around the 4K mark. Imagine your PC running a game at 4K, except it needs to render every frame twice for the headset and once more for your flatscreen, and maintain 90fps at all times. So, to achieve this, we code like it's 1999.
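To put some very rough numbers on how demanding that is (the per-eye resolution below is approximate for a Quest 2-class headset; treat the figures as illustrative, not a spec sheet):

```cpp
#include <cstdio>

// Back-of-the-envelope VR frame budget. The resolution is approximate
// (roughly 1832x1920 per eye on a Quest 2) - illustrative numbers only.
int main()
{
    const double RefreshHz      = 90.0;
    const double FrameBudgetMs  = 1000.0 / RefreshHz;          // ~11.1 ms to do *everything*, every frame
    const double PixelsPerEye   = 1832.0 * 1920.0;             // ~3.5 million pixels
    const double PixelsPerFrame = PixelsPerEye * 2.0;          // both eyes: ~7 million pixels
    const double PixelsPerSec   = PixelsPerFrame * RefreshHz;  // ~630 million shaded pixels per second

    std::printf("Frame budget: %.1f ms\n", FrameBudgetMs);
    std::printf("Pixels per frame: %.1fM, per second: %.0fM\n",
                PixelsPerFrame / 1e6, PixelsPerSec / 1e6);
    return 0;
}
```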
Optimized meshes
If you're working for a studio, you'll be given a poly limit for the objects you make, tied to an acceptable on-screen polygon count for the complexity of your project. As a very general rule, the more the player will see and interact with an object, the more detail it should have. To learn more about low-poly/optimized modeling, check out this legendary thread on Polycount: LOWPOLY (or: the optimisation appreciation organisation) — polycount
Multiplayer IK
Replicating that kind of thing over the network can present a world of issues. Consider Counter-Strike: Global Offensive. When you kill an enemy on a multiplayer server, their player model ragdolls onto the ground, but this is not replicated to all clients (in this example, the only thing that's replicated is the player model's collision). Each client sees a slightly different ragdoll effect because it's not something every player needs to see in exactly the same way. So it's basically a matter of network traffic: only game-critical data should be transmitted to and from the server, and physics effects generally aren't replicated.
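For context, here's roughly what "only replicate game-critical data" looks like in Unreal C++. The class and property names are just examples, not anything from a real project:

```cpp
#include "CoreMinimal.h"
#include "GameFramework/Character.h"
#include "Net/UnrealNetwork.h"
#include "ExamplePlayerCharacter.generated.h"

// Only properties explicitly marked Replicated go over the wire. Cosmetic
// things like ragdoll physics or IK targets are usually computed locally on
// each client instead of being replicated.
UCLASS()
class AExamplePlayerCharacter : public ACharacter
{
    GENERATED_BODY()

public:
    // Game-critical: every client needs to agree on this, so replicate it.
    UPROPERTY(Replicated)
    float Health = 100.f;

    // Cosmetic: each client can work this out for itself, so don't replicate it.
    FVector LeftHandIKTarget = FVector::ZeroVector;

    virtual void GetLifetimeReplicatedProps(TArray<FLifetimeProperty>& OutLifetimeProps) const override
    {
        Super::GetLifetimeReplicatedProps(OutLifetimeProps);
        DOREPLIFETIME(AExamplePlayerCharacter, Health); // register the replicated property
    }
};
```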
LOD aggressively
Not necessarily using maximum LODs with extreme poly reduction, but making sure everything has at least a near and a far LOD will help. Open a static mesh in the editor and check out the Details panel. There's a category called 'LOD Settings' with an option called 'Number of LODs'. Set this number higher than the default, click 'Apply Changes', and once the calculation is finished, switch to wireframe mode and zoom in and out of the model - you should see the polycount in the top-left corner of the window going down as the object gets smaller on screen. That's what you're looking for. Also, in your main editor window, if you click Lit > Optimization Viewmodes > Quad Overdraw, you can see the overdraw rate of your scene. You want as much dark blue on screen as possible, for as much of the scene as possible.
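If you want to eyeball a specific LOD from code while testing (rather than zooming the camera in and out), something like this works. It's a debug-only sketch:

```cpp
#include "Components/StaticMeshComponent.h"

// Debug sketch: force a component to display a specific LOD so you can check
// the reduction in-game. ForcedLodModel is 1-based: 0 means "auto-select by
// screen size" (what you want during normal play), 1 means LOD0, 2 means LOD1, etc.
void PreviewLOD(UStaticMeshComponent* MeshComp, int32 LODIndex)
{
    MeshComp->ForcedLodModel = LODIndex + 1;
    MeshComp->MarkRenderStateDirty(); // push the change to the renderer
}
```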
Material calculations/video memory
As a rule, anything you do in a material graph is handled by the GPU and anything you do in a Blueprint graph is handled by the CPU. When developing, any work you can offload from a Blueprint to the GPU probably should be. For example, you can add some rotation to an object on Event Tick, but this won't be as performant as using a RotateAboutAxis node plugged into World Position Offset in the object's material, even though the end result looks basically the same. This is another one of those things you need to consider at the start of each project - hardware requirements, processing speeds, frame budgets, on-screen polycounts, texture references, all that good (read: headache-inducing) stuff. A lot of indie devs just start working and optimize later, and whether that's a better approach than setting limits before development starts is a debate for another day.
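Here's the CPU version of that spin as a sketch (AMySpinningProp is just a made-up example actor). The material version, with RotateAboutAxis feeding World Position Offset, looks the same on screen, but the work happens per-vertex on the GPU and the actor's actual transform never changes:

```cpp
#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "MySpinningProp.generated.h"

// Hypothetical example actor that spins on the CPU every frame via Tick.
UCLASS()
class AMySpinningProp : public AActor
{
    GENERATED_BODY()

public:
    AMySpinningProp()
    {
        PrimaryActorTick.bCanEverTick = true; // enable per-frame Tick
    }

    virtual void Tick(float DeltaSeconds) override
    {
        Super::Tick(DeltaSeconds);
        const float DegreesPerSecond = 45.f; // illustrative spin speed
        AddActorLocalRotation(FRotator(0.f, DegreesPerSecond * DeltaSeconds, 0.f)); // yaw spin on the game thread
    }
};
```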