Would really like to see proof.
I’m not a 3D modeler, but basically a pixel can only represent one triangle of a mesh and one texel of a texture. So when a mesh with tons of detail is far away from the virtual camera, the renderer only samples those two things per pixel and misses tons of triangles and texels in between.
Let’s say real life were made of triangles and textures: a real camera would average all the light arriving from afar and pack hundreds of triangles into the one pixel representing that far-away mountain, etc.
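To show what I mean, here’s a tiny Python sketch (my own toy numbers, not from any engine): one sample per pixel on a fine stripe pattern produces a false low-frequency pattern, while averaging many samples inside each pixel footprint, which is roughly what a real camera or SSAA does, converges to a flat grey.

```python
import math

NUM_PIXELS = 32       # output resolution (just for the demo)
STRIPE_FREQ = 37.0    # stripes across the image; deliberately more than NUM_PIXELS/2
SUBSAMPLES = 64       # how many samples the "real camera" integrates per pixel

def stripe(x):
    """Ground-truth scene: a black/white stripe pattern along x in [0, 1)."""
    return 1.0 if math.sin(2.0 * math.pi * STRIPE_FREQ * x) > 0.0 else 0.0

def point_sampled():
    """One sample at each pixel center, like a naive rasterizer: aliases/moires."""
    return [stripe((i + 0.5) / NUM_PIXELS) for i in range(NUM_PIXELS)]

def area_sampled():
    """Average many samples inside each pixel footprint, like film or SSAA:
    converges to ~0.5 grey instead of a false low-frequency pattern."""
    out = []
    for i in range(NUM_PIXELS):
        acc = sum(stripe((i + (s + 0.5) / SUBSAMPLES) / NUM_PIXELS)
                  for s in range(SUBSAMPLES))
        out.append(acc / SUBSAMPLES)
    return out

if __name__ == "__main__":
    print("point sampled:", ["%.2f" % v for v in point_sampled()])
    print("area sampled: ", ["%.2f" % v for v in area_sampled()])
```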
On the texture side, shaders can bypass the pixel limitation because the sampler still has the full original texture data and can read a pre-filtered version of it (mipmaps) instead of a single raw texel.
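Here’s a rough 1-D sketch of that idea (mine, heavily simplified): the sampler reads from a pre-averaged copy of the texture, picked by how many texels land inside one screen pixel, instead of whichever single raw texel happens to line up.

```python
import math

def build_mip_chain(texture):
    """texture: a 1-D list of texel values with power-of-two length.
    Returns [level0, level1, ...], each level a 2x box-filtered copy of the previous one."""
    chain = [list(texture)]
    while len(chain[-1]) > 1:
        prev = chain[-1]
        chain.append([(prev[2 * i] + prev[2 * i + 1]) / 2.0
                      for i in range(len(prev) // 2)])
    return chain

def sample(chain, u, texels_per_pixel):
    """u in [0, 1). In a real shader texels_per_pixel would come from
    screen-space derivatives; here it is just passed in."""
    # Pick the mip whose texels are roughly the size of one screen pixel.
    level = min(max(int(math.log2(max(texels_per_pixel, 1.0))), 0), len(chain) - 1)
    mip = chain[level]
    return mip[min(int(u * len(mip)), len(mip) - 1)]

if __name__ == "__main__":
    # A hypothetical 16-texel black/white stripe texture.
    tex = [1.0, 0.0] * 8
    chain = build_mip_chain(tex)
    # Up close (1 texel per pixel) you get the raw texel; far away (8 texels
    # per pixel) you get the pre-filtered 0.5 grey instead of a random 0 or 1.
    print("up close:", sample(chain, 0.3, 1.0))
    print("far away:", sample(chain, 0.3, 8.0))
```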
We can’t do that with meshes. City Sample’s buildings are insanely high poly; without TAA to fake that averaging, or SSAA, you can find insane amounts of moiré patterns on objects from a trivial distance because of how much geometric information the renderer misses per frame.
Here is a small example of a high-poly mesh displaying moiré patterns because too few pixels are available to represent the data.
They need to make LODs that have the geometry-dense parts flattened into textures so mipmapping and shader-based filtering can take effect.
At around 0:36, he shows a close-up; the moiré is gone because enough pixels can sample the geometry.
Optimized models would either have those engravings faked via parallax occlusion mapping, with those parts of the mesh kept flat, or have LODs that flatten the model and decrease the detail as the camera moves away (see the sketch after this paragraph).
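Here’s a rough sketch of that distance-based LOD pick (hypothetical thresholds and triangle counts, nothing from City Sample): as the object covers fewer pixels, you swap in a version with fewer triangles so each remaining triangle still covers at least a pixel or so.

```python
import math

# Hypothetical LOD table: (screen-height fraction threshold, triangle count)
LODS = [
    (0.50, 200_000),  # LOD0: full detail, object fills half the screen or more
    (0.25, 50_000),   # LOD1
    (0.10, 12_000),   # LOD2: engravings baked to normal/parallax maps instead
    (0.00, 2_000),    # LOD3: silhouette only
]

def screen_height_fraction(object_radius, distance, fov_y_radians):
    """Rough fraction of the screen height the object's bounding sphere covers."""
    if distance <= object_radius:
        return 1.0
    angular_size = 2.0 * math.atan(object_radius / distance)
    return min(angular_size / fov_y_radians, 1.0)

def pick_lod(object_radius, distance, fov_y_radians=math.radians(60)):
    frac = screen_height_fraction(object_radius, distance, fov_y_radians)
    for index, (threshold, tris) in enumerate(LODS):
        if frac >= threshold:
            return index, tris, frac
    return len(LODS) - 1, LODS[-1][1], frac

if __name__ == "__main__":
    for d in (5.0, 12.0, 30.0, 120.0):
        lod, tris, frac = pick_lod(object_radius=2.0, distance=d)
        print(f"distance {d:6.1f}m -> {frac:5.1%} of screen -> LOD{lod} ({tris:,} tris)")
```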
This is why Nanite marketplace assets are worthless to games.
Most game models start off high poly and then have to be optimized for both performance and visuals.