Optimizing textures/polycount from SFM (photogrammetry)

I’ve recently switched SFM software, and I’m weighing options for how a large set is reconstructed and unwrapped for texturing. I need to consider what limits UE4 imposes and how best to leverage LOD and such. I realize this question opens numerous rabbit holes, so I’m here to focus on a logical first step. But to avoid those rabbit holes, it helps if I first frame my understanding of the larger ecosystem that “optimization” has to account for. I’m aware that normal maps and displacement maps do the heavy lifting for geometry at micro and mid-frequency detail, and that ideally one defines a sweet-spot crossover between the work of displacement maps and polycount for a given area (the latter controlling low-frequency detail in the scene, i.e. the macro structure of a space or object), such that meshes are decimated as much as possible without their silhouettes becoming oversimplified.

It’s my understanding that LOD can be leveraged to sustain higher-polycount meshes the smaller the area they represent in a set. That is, the more a scene is chunked into smaller pieces, the more a virtual camera can leverage occlusion culling to a) load only those pieces that aren’t occluded and that fall within the FOV, and b) load the pieces nearest the camera at the highest LOD, letting detail fall off more aggressively with distance. Am I on track so far?
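To make my mental model concrete, here’s roughly how I picture LOD selection working per chunk. This is a sketch of the usual screen-size heuristic, not UE4’s actual code, and the function names and thresholds are just my own illustration:

```python
import math

def screen_size_fraction(bounds_radius_m, distance_m, fov_deg=90.0):
    """Rough fraction of the screen a chunk's bounding sphere covers.

    A sketch of the screen-size metric engines typically use to pick
    LODs -- not UE4's exact formula.
    """
    half_fov = math.radians(fov_deg) / 2.0
    # Projected radius relative to the view frustum at this distance.
    return bounds_radius_m / (distance_m * math.tan(half_fov))

def pick_lod(bounds_radius_m, distance_m, thresholds=(0.5, 0.25, 0.1)):
    """LOD 0 (full detail) when the chunk fills much of the screen,
    higher (coarser) LOD indices as it shrinks. Thresholds illustrative."""
    s = screen_size_fraction(bounds_radius_m, distance_m)
    for lod, t in enumerate(thresholds):
        if s >= t:
            return lod
    return len(thresholds)  # coarsest LOD

# A chunk with a 2 m bounding radius: full detail up close,
# coarsest LOD from far across the set.
print(pick_lod(2.0, 3.0))    # near the camera
print(pick_lod(2.0, 100.0))  # far away
```

The point of the sketch is the interaction with chunking: smaller chunks mean each one’s bounding sphere shrinks below the thresholds sooner, so more of the scene can sit at coarse LODs at any one time.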

My SFM solution supports exporting a mesh with a single texture in parts, and I have control over max polycount per part; each part drops back into UE4 with a common scale and coordinate system, with no visible seams. This single texture can be up to 16K, but I’ve read here that UE4 can only be configured for 8K max resolution. Can anyone confirm?
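For context, the one workaround I’ve come across (assuming I’ve understood the docs correctly) is enabling virtual texture support in DefaultEngine.ini, which as I understand it streams large textures in tiles rather than loading them whole:

```ini
[/Script/Engine.RendererSettings]
; Enables the virtual texturing pipeline (UE 4.23+).
; Individual texture assets still need "Virtual Texture Streaming"
; checked in the texture editor to use it.
r.VirtualTextures=1
```

I’d welcome correction if this isn’t the right lever for textures above 8K.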

I realize 16K textures are whopping huge, but my deliverables can assume beefy system resources, and if a desktop only has to load the texture once while parsing many tiny mesh parts, then I’d think that would well define “optimized”. In my case it’s all about the fidelity of a scan-based virtual environment (no slinging swords and such), which is why I’m pushing for answers in this direction.
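For scale, here’s the back-of-envelope arithmetic I’m working from for a single 16K color texture. The compression ratios are the standard BC1/BC3 block-compression figures; actual streaming cost in-engine will differ:

```python
# Rough VRAM footprint of one 16K color texture (illustrative
# arithmetic only; real cost depends on format, mips, and platform).
side = 16384
texels = side * side

uncompressed = texels * 4        # RGBA8: 4 bytes per texel
bc1 = texels // 2                # BC1/DXT1: 0.5 bytes per texel
bc3 = texels                     # BC3/DXT5: 1 byte per texel
mip_factor = 4 / 3               # a full mip chain adds ~33%

print(f"RGBA8, no mips: {uncompressed / 2**20:.0f} MiB")
print(f"BC1 + mips:     {bc1 * mip_factor / 2**20:.0f} MiB")
print(f"BC3 + mips:     {bc3 * mip_factor / 2**20:.0f} MiB")
```

So even compressed, one 16K texture is in the hundreds-of-MiB range, which is why I want to be deliberate about the single-texture-many-parts approach rather than stumbling into it.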

A second level of chunking can be done by setting slightly overlapping “reconstruction zones” in a larger set, also exported with a common scale and coordinate system. But here I’ve not found a way to completely hide the seams, which are straight (defined by the walls of the reconstruction zone) and show up easily in a scan of a natural environment. I mention this only because the visible-seam problem makes this alternative less attractive; I’d like to exhaust what’s possible with mesh parts sharing a single texture before resorting to many meshes, each with its own texture. Still, I’d like to hear how these two approaches are affected by LOD, and which is smarter overall for optimizing a high-fidelity virtual environment.

Thanks!