VR pix2pix pipeline

Hi,

I am currently working on my semester thesis for atelier classes in architecture. This year I am interested in AI and its possibilities in architecture. What I am trying to accomplish is a live VR experience of a simple scene (no shadows, no textures, just colours and simple shapes). The image rendered in VR should serve as the label image for my pix2pix network. Creating the scene is no problem for me, and neither is getting the network working. My question is: how would one go about building the pipeline that sends the current view (as seen in the VR glasses) as an image into my network and displays the translated image back on the VR headset in real time, instead of the original one? Does anyone have any idea?
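To make it concrete, here is a very rough sketch of what I imagine the Python side could look like: a small loop that receives a rendered frame from the engine over a local TCP socket, runs it through the pix2pix generator, and sends the translated frame back so the engine can put it on the headset. Everything in it is an assumption on my part, including the model file name, the port, and the 256x256 resolution, and the engine side would still have to grab the eye texture and push it over the socket.

```python
# Sketch only: all file names, the port, and the image size are placeholders.
import socket
import numpy as np
import torch

WIDTH, HEIGHT = 256, 256          # resolution the network was trained on (assumption)
FRAME_BYTES = WIDTH * HEIGHT * 3  # raw RGB, 8 bits per channel

# Hypothetical exported pix2pix generator (e.g. a torch.jit traced/scripted model).
generator = torch.jit.load("pix2pix_generator.pt").eval().cuda()

def translate(frame_rgb: np.ndarray) -> np.ndarray:
    """Run one RGB frame (H, W, 3, uint8) through the generator."""
    x = torch.from_numpy(frame_rgb).float().div(127.5).sub(1.0)  # scale to [-1, 1]
    x = x.permute(2, 0, 1).unsqueeze(0).cuda()                   # HWC -> NCHW
    with torch.no_grad():
        y = generator(x)
    y = y.squeeze(0).permute(1, 2, 0).add(1.0).mul(127.5)        # back to [0, 255]
    return y.clamp(0, 255).byte().cpu().numpy()

def recv_exact(conn: socket.socket, n: int) -> bytes:
    """Read exactly n bytes from the socket."""
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("engine disconnected")
        buf += chunk
    return buf

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 9999))   # port is arbitrary
server.listen(1)
conn, _ = server.accept()

while True:
    raw = recv_exact(conn, FRAME_BYTES)
    frame = np.frombuffer(raw, dtype=np.uint8).reshape(HEIGHT, WIDTH, 3).copy()
    conn.sendall(translate(frame).tobytes())
```

I chose raw RGB over a local socket just to keep the sketch simple. Is something along these lines a sensible direction, or is there a better way to get the frames out of the headset view and back in?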

Thank you very much!