Optimizing Arch Viz in UE4 for Android

Hi All, sorry for the newbie question, but I’m interested in learning about how you might be able to optimize UE4’s arch viz type renderings for Android for a Gear VR type format or perhaps a VR app suited for Google Cardboard. Any thoughts? Thanks!

I’m waiting for this answer too…

GearVR specs are ridiculous… 50 (max 100) draw calls, 50k (max 100k) triangles, a consistent 60fps, no transparency, no post process, single-pass shaders, no dynamic lighting…

If you’re looking for realism in ArchViz, forget developing for GearVR… yes, you’re wireless, but you’re running a PS2 game, not very tempting…
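
For anyone wondering what those constraints look like in practice, they roughly translate into project settings along these lines. This is only a sketch of a DefaultEngine.ini for a Gear VR style target, and the exact flag names vary between engine versions, so double-check them against your UE4 docs before relying on them:

```
; rough mobile VR starting point -- verify flag names against your UE4 version
[/Script/Engine.RendererSettings]
r.MobileHDR=False                ; turns off the mobile post process chain (required for Gear VR)
r.MobileContentScaleFactor=1.0   ; drop below 1.0 if you end up GPU-bound
r.MobileMSAA=2                   ; cheap anti-aliasing, only usable once mobile HDR is off

[/Script/AndroidRuntimeSettings.AndroidRuntimeSettings]
bPackageForGearVR=True           ; adds the Gear VR entries to the AndroidManifest
```

Disabling mobile HDR is the big one: it is what kills the post process chain, which is why the list above says “no post process” in the first place.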

Or just make 360 panoramas for Gear VR. Better than nothing, I guess!
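
If you do go the panorama route, recent UE4 versions ship an experimental Stereo Panoramic Movie Capture plugin, and once it’s enabled you can grab stereo panoramas straight from the console. Roughly like this; the command names may differ slightly between versions, so treat it as a pointer rather than a recipe:

```
SP.OutputDir D:/Panoramas
SP.HorizontalAngularIncrement 2
SP.PanoramicScreenshot
```

The first two set the output folder and the capture density, and SP.PanoramicScreenshot does the actual capture from the current camera position, giving you left/right eye images you can assemble into an over/under panorama for the headset.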

Agreed. We run a big studio here. 360 panoramic renders are super easy, fast, beautiful… all you could wish for, except for one thing: you have a fixed camera location, so no “walking”. On the other hand, walkthroughs on Gear VR are very hard to do, take a lot of time and effort (= money), and in the end look crappy, so financially it’s a losing game.

I’d love to give a project like this an attempt, but I don’t have access to a Gear VR. You could easily make a decent scene with those limitations, just no fancy materials or shaders, and avoid glass and mirrors. Keep it simple enough that you won’t need super large or detailed lightmaps. I bet you could make it look almost as good as desktop with the right small scene and assets, until you look at the assets close up.

Yeah, but basically what you are doing is selecting or designing a specific, super-limited scene based on the limitations of your device, and then saying “see, of course it’s possible to do cool arch viz on a Gear VR”. That would boil down to SketchUp-level visuals, with some low-res global illumination on them, and horrible, horrible contact shadows and reflections. As posted above, if your interior scene happens to look like a PS2-style render, then yeah, you could pull it off.

By the way, you probably have a mobile phone, either Android or iOS. See what you can achieve using that, and then strip all postprocessing and halve the polygon count.
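
If you want to measure rather than guess while doing that, the built-in stat commands are enough to tell you whether you’re anywhere near the budgets quoted above. For example:

```
stat unit
stat rhi
stat scenerendering
show postprocessing
```

stat unit shows frame, game thread, draw thread and GPU times, stat rhi shows draw call and triangle counts, and toggling the post processing show flag makes it obvious how much of the frame it was costing you.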

I guess I could try turning one of the marketplace scenes into something Android-ready. But yes, you have to work around the limitations of the platform. There are some cheats and workarounds you could use to get stuff like contact shadows back. Games are all about faking it to get the result you want.

True, but trust me: you very, very quickly reach the point where the ratio of tricks/cheats/workarounds to visual payoff means you spend a lot of time and effort and get back a very poor-looking result. Yes, there are always tricks you can do, but realistically it’s not feasible unless you’re willing to put in a disproportionate amount of work.

To compare and exaggerate a bit: you can get the same results in Paint that you get in Photoshop, with enough tricks, cheats and workarounds, and if you choose the right image and look to go for. Still does not make it a good or even interesting option.

Here you go:
https://developer.oculus.com/blog/squeezing-performance-out-of-your-unity-gear-vr-game/ (sorry, it’s written against Unity, but the performance constraints Gear VR imposes are about identical for UE4, and of course UE is better :wink: )

I think the best way is to keep in mind the “technical to art” aspects of your pipeline: keep your poly count low, optimize UVs, and use old-school game art techniques. Vertex lighting? Maybe that’s too far. The best pipeline to follow, in my opinion, is the game industry’s (perhaps I’m biased). I think at best this generation of smartphones sits somewhere between a PlayStation 2 and a PlayStation 3 in specs. Once the Vulkan API is adopted, that should speed things up a bit on the mobile side. I hope that helps.
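
On the Vulkan point: as soon as your engine version exposes it, trying it is just an opt-in in the Android project settings. A sketch of the relevant DefaultEngine.ini section, assuming a build with the experimental Android Vulkan support (property names may have changed since):

```
[/Script/AndroidRuntimeSettings.AndroidRuntimeSettings]
bSupportsVulkan=True   ; use the Vulkan RHI where the device and driver support it
bBuildForES31=True     ; keep an OpenGL ES fallback packaged for everything else
```

Devices without working Vulkan drivers should simply fall back to the GLES path when both are packaged, so it’s a cheap thing to experiment with.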