I’m really surprised there was nothing at GDC showing progress in coupling generative AI with render engines, particularly from Epic, and especially from Nvidia.
With tech like Sora producing UHD video from text input within minutes, using the frame buffer as input to a generative model that “reinterprets” it any way one desires seems like the inevitable last stop on the train. Yet progress appears to have stalled.
Examples of neural rendering / using frame buffer as input to Generative AI:
Enhancing Photorealism using AI
Nvidia Research | Segmentation Maps
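To make the idea concrete, here is a minimal sketch of the loop I mean: the engine renders a frame, the frame buffer is handed to a generative image-to-image model, and the model’s reinterpretation is what gets displayed. The `reinterpret_frame` function below is a hypothetical stand-in for the model call (in practice something like a diffusion img2img pass conditioned on the rendered frame); everything in it is a placeholder, not a real API.

```python
import numpy as np

def reinterpret_frame(frame: np.ndarray, strength: float = 0.5) -> np.ndarray:
    """Hypothetical stand-in for a generative image-to-image model.

    A real pipeline would run something like diffusion-based img2img
    conditioned on the rendered frame; here we just blend the frame
    toward a flat image to represent the model's "reinterpretation".
    """
    stylized = np.full_like(frame, 128)  # placeholder "generated" output
    blended = (1.0 - strength) * frame + strength * stylized
    return blended.astype(np.uint8)

# Simulated frame buffer: one 720p RGB frame from the render engine.
frame_buffer = np.random.randint(0, 256, size=(720, 1280, 3), dtype=np.uint8)

# Per-frame post-process: present the model's output instead of the raw raster.
output = reinterpret_frame(frame_buffer, strength=0.7)
print(output.shape)
```

The hard part, and what I was hoping to see demoed, is doing this at framerate with temporal stability, which is exactly where engine and GPU vendors would come in.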
Neural rendering is such a huge market to capitalize on that I can’t be alone in my anticipation of and interest in it.
It’s as clear as day that the future of everything graphical and visual will be generated in real time using AI image synthesis.
Does anyone have news to share in this regard? Progress, media, tech demos, etc.
Thanks.