I’m dealing with two related issues and would much appreciate insights to untangle them. I’m working with photogrammetry-based sets, in this case a psychedelic potash mine in Siberia. The audience for this isn’t gamers; I’ll be projecting on a large screen in 3D, so one value I want to preserve is scale, since the UE4 plugin is standardized to a human-scale 3D view of the world. The meshes export from the SfM app in meters, with no way to change that, and since my batch-processing script in 3ds Max (which adds the required smoothing group) doesn’t change scale, the import to UE4 reads this meter-based world as centimeters. If I multiply the mesh by 10 to get it back toward meters, my point light sources have to be cranked to maximum to really do anything. First question: is there a workflow to fit the range of intensities possible in a Point Light or Spot Light to larger-scale objects/environments?
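As a sanity check on why the lights max out, here is the inverse-square relationship as I understand it (a rough sketch; the unitless-intensity assumption and the example numbers are mine, not taken from UE4 documentation):

```python
# Sketch: with inverse-square falloff, uniformly scaling a scene up by a
# factor k moves every surface k times farther from each light, so a light
# must be k^2 brighter to produce the same illuminance on those surfaces.

def required_intensity(base_intensity: float, scale_factor: float) -> float:
    """Intensity needed after uniformly scaling the scene by scale_factor."""
    return base_intensity * scale_factor ** 2

# A scene scaled up 10x needs 100x the light intensity for the same look.
print(required_intensity(5000.0, 10.0))  # 500000.0
```

This is why a 10× rescale pushes point lights to their ceiling: the demand on intensity grows with the square of the scale, not linearly.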
Secondly, if I scale the environment meshes down to what works for the light-source intensities as they are, the scale in the 3D projection will be wrong (hyperstereo). Do my mesh objects need to come in at the correct scale for the default range of light intensities to work properly on them?
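To make the hyperstereo concern concrete, here is how I’m reasoning about it (a rough model; the 6.3 cm IPD figure and the equivalence I assume between shrinking the world and widening the stereo interaxial are my own assumptions):

```python
# Sketch: shrinking the world by `world_scale` while keeping the stereo
# camera interaxial at human IPD is, in this simple model, equivalent to
# filming a full-scale world with a wider effective interaxial, which
# the eye reads as miniaturization (hyperstereo).

HUMAN_IPD_CM = 6.3  # assumed average interpupillary distance

def effective_interaxial(world_scale: float,
                         interaxial_cm: float = HUMAN_IPD_CM) -> float:
    """Equivalent full-scale interaxial after scaling the world by world_scale."""
    return interaxial_cm / world_scale

# Scaling the set down to 1/10 scale acts like a 63 cm interaxial:
print(effective_interaxial(0.1))  # 63.0
```

Which is exactly the miniature-model look I’m trying to avoid on the big screen.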
There’s yet more to this problem, if I may continue. My set isn’t actually a monolithic mesh object: I export in parts, with each part set to the maximum vertices per part, so an export of even a single chunk of a large set can comprise 500 FBX meshes sharing some twenty 8K albedo texture maps. The UV islands on these maps, depending on the high-frequency detail in the environment, are typically far too small to support static or stationary lights in UE4; the baked lightmaps come out black given such small UVs. So I use movable lights, and on a GTX 1080 I’m sustaining 60 fps with 40 million polys and many dozens of 8K maps, leveraging occlusion culling to maintain a decent frame rate.

A lot of what I’m saying surely flies in the face of conventional workflow, but keep in mind that my content is all about the environment: no gameplay to speak of, no flying swords, etc. So I’ve been getting away with the heavy geometry and texture maps without pushing into a serious approach to optimization. A much broader question, taking all these moving parts into consideration: how might I rethink the current workflow?
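To illustrate why the bakes come out black, here is a quick estimate of the lightmap resolution a tiny UV island would need (a hypothetical helper; the island-fraction numbers and the 4-texel floor are assumptions of mine, not UE4 internals):

```python
import math

def min_lightmap_resolution(island_uv_fraction: float,
                            min_texels: int = 4) -> int:
    """Smallest power-of-two square lightmap such that a UV island covering
    `island_uv_fraction` of UV area still receives at least `min_texels`
    texels. Below that, baked lighting on the island is black or blotchy."""
    # Texels the island gets at resolution R: island_uv_fraction * R * R
    needed = math.sqrt(min_texels / island_uv_fraction)
    return 2 ** math.ceil(math.log2(needed))

# An island covering 0.01% of UV space needs at least a 256x256 lightmap
# to receive ~4 texels:
print(min_lightmap_resolution(0.0001))  # 256
```

With hundreds of dense photogrammetry islands per map, the required resolution quickly becomes impractical, which is what pushed me to movable lights in the first place.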
Many thanks for insights.