I’ve spent the past week trying to find where in the code this memory transfer happens, but I’m not very familiar with the engine, and the codebase has been too big and confusing for me to make progress. So I figured I’d be better off asking for guidance instead of slamming my head against the wall.
More specifically, the problem I’m facing is that I’d like to share data about the View (the StaticMesh is just an example) with another process using CUDA. I have a working implementation, but right now I copy the data from CPU to GPU inside my own class, which is a redundant transfer and a performance hit. Since the renderer already needs this data on the GPU, I was wondering if there’s a way to hijack (or extend, via the SceneViewExtension classes) the engine so that the data lands in both my CUDA buffer and the buffer used by the render thread at the same time.
(Since I know UE doesn’t use CUDA, the idea would be to create CUDA interop buffers with the rendering API and then share those through IPC, if possible).
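For reference, the interop idea I have in mind looks roughly like the sketch below: create (or obtain) a D3D12 resource on the rendering side, export a shared handle, and import it into CUDA via the external-memory API, so CUDA kernels see the same GPU allocation the renderer writes. The part where the `ID3D12Resource*` would come from UE’s RHI is an assumption on my end, not something I’ve confirmed in the engine code; `CreateSharedHandle`, `cudaImportExternalMemory`, and `cudaExternalMemoryGetMappedBuffer` are the standard D3D12/CUDA interop calls.

```cuda
// Hedged sketch (Windows/D3D12): map a renderer-owned buffer into CUDA
// without an extra CPU->GPU copy. Error handling omitted for brevity.
#include <cuda_runtime.h>
#include <d3d12.h>

void* ImportD3D12BufferIntoCuda(ID3D12Device* device,
                                ID3D12Resource* buffer,  // from the renderer; how to get
                                                         // this out of UE's RHI is the
                                                         // open question in this post
                                size_t sizeInBytes)
{
    // Export an NT handle for the resource. This handle could also be
    // duplicated into another process for the IPC part of the idea.
    HANDLE sharedHandle = nullptr;
    device->CreateSharedHandle(buffer, nullptr, GENERIC_ALL, nullptr,
                               &sharedHandle);

    // Describe the external memory to CUDA and import it.
    cudaExternalMemoryHandleDesc memDesc = {};
    memDesc.type = cudaExternalMemoryHandleTypeD3D12Resource;
    memDesc.handle.win32.handle = sharedHandle;
    memDesc.size = sizeInBytes;
    memDesc.flags = cudaExternalMemoryDedicated;

    cudaExternalMemory_t extMem = nullptr;
    cudaImportExternalMemory(&extMem, &memDesc);

    // Map the imported memory to a device pointer usable by CUDA kernels.
    cudaExternalMemoryBufferDesc bufDesc = {};
    bufDesc.offset = 0;
    bufDesc.size = sizeInBytes;

    void* devPtr = nullptr;
    cudaExternalMemoryGetMappedBuffer(&devPtr, extMem, &bufDesc);
    return devPtr;  // same GPU memory the render thread uses
}
```

The missing piece, and the reason for this post, is where in the engine the equivalent upload happens and how to get at the underlying native resource there.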
The biggest problem so far, though, has been finding where the data gets uploaded to GPU memory in the first place. The codebase is just too big, and I can’t navigate it well enough to locate any of the code paths I’m looking for.
On the related question of *when*: I’m also wondering whether the data gets uploaded every frame, only when it changes, only on map load, etc. I imagine reading the code would answer this, but again, I can’t find it.