Interesting thought; with enough high-detail geo you could bake texture data right into the vertex colors.
After running a small-scale test I found there was no noticeable loss in quality, and I saved ~6 MB in file size and 99 shader instructions on an asset with a quarter of a million tris, with diffuse and spec baked into vertex colors vs texture samples (DIFF/SPEC/NORM at 4K).
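For anyone wanting to sanity-check where savings like that come from, here's a rough back-of-envelope sketch. The numbers are illustrative assumptions, not the measured results above: it assumes BC1/BC5 block compression for the textures, RGBA8 vertex colors, and roughly half as many vertices as triangles on a welded mesh.

```python
# Back-of-envelope memory comparison for vertex colors vs textures.
# Assumptions (not measured data): BC1 is 0.5 bytes/texel, BC5 is
# 1 byte/texel, vertex colors are RGBA8 (4 bytes/vertex).

def texture_bytes(side, bytes_per_texel):
    """Base mip of a square texture; the mip chain adds roughly 1/3 more."""
    base = side * side * bytes_per_texel
    return base * 4 // 3  # include the mip chain

def vertex_color_bytes(vertex_count):
    return vertex_count * 4  # RGBA8 per vertex

# ~250k triangles; a typical welded mesh has roughly half as many vertices
verts = 125_000
# DIFF + SPEC as BC1, NORM as BC5, all at 4K
textures = texture_bytes(4096, 0.5) * 2 + texture_bytes(4096, 1)

mb = lambda b: b / (1024 * 1024)
print(f"textures: {mb(textures):.1f} MB, vertex colors: {mb(vertex_color_bytes(verts)):.2f} MB")
```

The exact savings depend on compression settings and how much vertex data the mesh already carries, but the gap between three 4K maps and a per-vertex color attribute is large either way.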
And there are other things to explore like channel packing, stylized rendering, and perhaps even VR when support comes around.
I’m very interested to see how VR will benefit from some mindful use of these new systems and the techniques that will stem from them.
I want to know what other people think: viable or nah? Any predictions for VR?
Wouldn’t this end up breaking down at distance when Nanite reduces the triangle count of the mesh, though?
Have you tried seeing what happens as you zoom out from the object?
So long as the object doesn’t have 100% planar regions, it actually does a good job of preserving detail. As an interesting side note, on a flat subdivided plane Nanite does try to preserve high-contrast detail in the vertex colors.
Definitely sounds like a promising workflow. It’s something I have yet to try in UE. Can I ask what you mean when you say ‘vs texture samples’? (nvm: just too early in the morning, I take it the ‘vs’ is Vertex Shader)
I tried to implement such a system in UDK and UE4 (as a hobby project) but it didn’t work very well.
Care to share some screenshots of the progress you have made with your test?
It’s what a texture node is in a shader graph. A single texture sample costs about 10 shader (GPU) instructions, which is what makes RGB channel packing so effective.
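To illustrate why channel packing pays off: three grayscale masks can live in the R, G and B channels of one texture, so one sample fetches all three at once instead of paying the per-sample cost three times. This is just a sketch of the packing idea in plain Python; the mask names are illustrative, not from the thread.

```python
# Channel packing sketch: three grayscale masks (e.g. roughness,
# metallic, AO -- names are illustrative) stored in the R, G and B
# channels of a single 24-bit texel.

def pack(rough, metal, ao):
    """Pack three 0-1 floats into one RGB texel."""
    to8 = lambda v: round(max(0.0, min(1.0, v)) * 255)
    return (to8(rough) << 16) | (to8(metal) << 8) | to8(ao)

def unpack(texel):
    """One 'texture sample' returns all three masks at once."""
    return ((texel >> 16) / 255,
            ((texel >> 8) & 0xFF) / 255,
            (texel & 0xFF) / 255)

texel = pack(0.5, 1.0, 0.25)
rough, metal, ao = unpack(texel)
```

In a material graph the same idea is one texture sample node feeding its R, G and B outputs into three different inputs, instead of three separate sample nodes.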