I have not implemented this fully myself, but I have played around with these kinds of systems individually, just not together. The things you will run up against, and potential workarounds:
The main weights you will run into are Character and Actor. There is no hard limit on the number of Actors in a given Level, but Actors can get heavy on system memory very quickly. Typically you want to stay under 1,000 Actors if you can help it, and 1,500 can easily torture even high-memory systems; past 1,500 you can also start seeing slowdown in collision response. ActorComponents can alleviate a bunch of this, but there is a practical limit to the number of ActorComponents too, which is a lot fuzzier.
If we are specifically talking about cube collisions, you gain a bit of performance: binary space partitioning over axis-aligned boxes is rather fast for BSP.
One of the bigger limitations is world size: even with Level Streaming, unless you are doing coordinate (origin) shifting on your "static" world, there is a fundamental limit on how big the world can get. World Partition, and the shift to transforms being based on double instead of float, will do a lot of the heavy lifting here; just be willing to rein in draw distance aggressively.
For LOD and rendering, if most things are cube-based, Nanite could be overkill (and might just lead to optimization headaches): if everything is already a cube, there is little point breaking it down into micro-triangles.
The biggest hurdle after Actor and SceneComponent instancing is texture instancing and creating sub-chunks. In Minecraft there were the general chunks of the biomes, but then sub-chunks for resources. The textures can be handled through atlasing: instead of a "treeTexture" and a "leafTexture" you could have 4 tileable textures on one image that defines the object, and then reference the sub-texture of the atlas (just keep it square, and powers of 2, for best effect).
A 3D array is not a built-in thing, and it is easy to run into fragmentation and sizing issues. For best effect, whatever 3D array implementation you come up with should be in C++. After working with different methods of creating a 3D array, here are some tips:
- don't try to fake it with a single resizable 1D array for the whole world: you end up with so many move operations and potential overlaps that you often end up remaking the whole thing for every resize operation, which blows up the Heap every single time. Then there are memory sizing constraints, and you will have flashbacks after you decide to throw it away.
- an Unreal TArray's storage is heap allocated, and structs stored in it live there too, so a TArray of structs is a heap-allocation miracle when it works. I would strongly suggest the "Block" not be a full struct: representing it as something small that references shared data is slower for lookups (the pointer needs to be dereferenced), but the memory footprint on the Heap will be worlds smaller.
- an Actor per block is probably still too heavy; working in the editor, you will probably end up with a UObject in the array that points to the SceneComponent that is the block.
Dynamic navigation is a chore, especially in fully 3D-modifiable environments, and rebuilding the NavMesh is costly when done too often.
All of this presumes you are working with cube-based voxels. If you start dealing with more complex geometry, life gets a lot harder, and you will probably want to look into meta-shapes, where your voxels could be spheres or arbitrary boxes instead of cubes; you would lose axis alignment and potentially need a full Transform per voxel, but it is still doable.