I come from the film side, and we dealt with very similar issues with the advent of 3D films. One thing that was surprising to learn is that we humans are not totally reliant on binocular vision to build our perception of depth: we take many other cues into account in addition to right-left eye fusion. If the difference in depth between two objects is small relative to their distance from the viewer, we don't actually rely on binocular vision to perceive it, but instead on other cues such as light and shadow. GREAT list of depth cues here: https://en.wikipedia.org/wiki/Depth_perception
This explains why normal maps will still work perfectly well: the relative distance between, say, a tile and the grout between the tiles is so small that we rely on shading and shadow to perceive it, not binocular vision.
Interestingly, this applies to things farther away than 30 or so feet as well. When we look at the Grand Canyon, we can tell the far wall is farther away than the near wall, but not because of binocular vision: beyond that distance both eyes see essentially the same image, so we lean on cues like perspective, occlusion, and atmospheric haze instead.
Ultimately, this could have some really cool implications for VR rendering! I could see engines implementing a scenario where you render out the dominant eye frame and then only redraw things closer than a certain depth when building the frame for the second eye.
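To make that idea concrete, here's a minimal sketch of what the culling decision might look like, assuming a hypothetical engine where each object carries a world-space bounding sphere. The object names, the 10 m (~30 ft) cutoff, and the `ObjectsToRedrawForSecondEye` helper are all illustrative assumptions, not anything from a real engine: only objects whose nearest point falls inside the cutoff would get a true second-eye render, and everything farther would be reused from the dominant-eye frame.

```cpp
// Sketch: split scene objects by distance from the viewer so only "near"
// objects are re-rendered for the second eye. The cutoff value and the
// scene contents are hypothetical examples.
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };

struct SceneObject {
    const char* name;
    Vec3 position;   // world-space center of the bounding sphere
    float radius;    // bounding-sphere radius, so objects straddling the cutoff aren't missed
};

static float Distance(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Objects whose closest point is within the stereo cutoff get a real
// second-eye pass; everything else would be filled in from the dominant eye.
std::vector<const SceneObject*> ObjectsToRedrawForSecondEye(
        const std::vector<SceneObject>& scene,
        const Vec3& eyePosition,
        float stereoCutoffMeters) {
    std::vector<const SceneObject*> nearObjects;
    for (const SceneObject& obj : scene) {
        if (Distance(obj.position, eyePosition) - obj.radius < stereoCutoffMeters) {
            nearObjects.push_back(&obj);
        }
    }
    return nearObjects;
}

int main() {
    std::vector<SceneObject> scene = {
        {"cockpit lever",   {0.3f, -0.2f, 0.8f},   0.1f},
        {"nearby crate",    {2.0f,  0.0f, 4.0f},   0.5f},
        {"far canyon wall", {0.0f,  0.0f, 500.0f}, 50.0f},
    };
    Vec3 eye = {0.0f, 0.0f, 0.0f};
    const float cutoff = 10.0f;  // ~30 ft, roughly where binocular disparity tapers off

    for (const SceneObject* obj : ObjectsToRedrawForSecondEye(scene, eye, cutoff)) {
        std::printf("re-render for second eye: %s\n", obj->name);
    }
    return 0;
}
```

In practice the far geometry would presumably be reprojected from the dominant eye's color and depth buffers rather than copied verbatim, to keep the seam at the cutoff from showing visible parallax errors.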