Ok so first of all you need to make sure you're not sampling from the SceneTexture:PostProcessInput0 node but from a regular TextureSample node, whose texture you set to the TextureRenderTargetCube your SceneCaptureComponentCube renders into every frame.
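In case you're wiring that up from C++ rather than in the editor, the gist looks roughly like this. Just a sketch: the function name, the variable names and the "CubeMap" parameter name are placeholders of mine, and it assumes you've exposed the cube map as a texture parameter in the material so you can set it on a dynamic material instance.

```cpp
#include "Components/SceneCaptureComponentCube.h"
#include "Engine/TextureRenderTargetCube.h"
#include "Materials/MaterialInstanceDynamic.h"

// Point the cube capture at a TextureRenderTargetCube and hand that same
// render target to the fisheye material as a texture parameter.
// "CubeMap" is a placeholder name, use whatever your material calls it.
void SetupFisheyeCapture(USceneCaptureComponentCube* Capture,
                         UTextureRenderTargetCube* CubeRT,
                         UMaterialInstanceDynamic* FisheyeMID)
{
    Capture->TextureTarget = CubeRT;     // the cube map the capture renders into
    Capture->bCaptureEveryFrame = true;  // refresh the capture every frame

    // The material samples this texture with a regular TextureSample node,
    // exposed here as a texture parameter named "CubeMap".
    FisheyeMID->SetTextureParameterValue(TEXT("CubeMap"), CubeRT);
}
```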
Unlike with 2D textures, where the TextureSample node takes two-dimensional UV coordinates as its input, with a TextureRenderTargetCube you have to give it a 3D vector instead. Imagine folding your cube map back into a cube and standing right at its center. You then shoot a ray in some direction and take the color of whichever pixel of the cube map the ray hits. That is essentially what the TextureSample node does when sampling a cube map, and the direction of this imaginary raycast is your 3D input vector.
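Just to make the "ray from the center" picture a bit more concrete, here's roughly what that lookup does under the hood. This is purely illustrative, plain C++, and it assumes the usual +X, -X, +Y, -Y, +Z, -Z face layout; the TextureSample node does all of this for you and the engine's exact axis convention may differ.

```cpp
#include <cmath>
#include <cstdio>

// Sketch of sampling a cube map with a 3D direction: pick the face the ray
// exits through, then project onto that face's 2D texture coordinates.
void DirectionToCubeFaceUV(float X, float Y, float Z, int& Face, float& U, float& V)
{
    const float AbsX = std::fabs(X), AbsY = std::fabs(Y), AbsZ = std::fabs(Z);
    float Major, Uc, Vc;

    if (AbsX >= AbsY && AbsX >= AbsZ)
    {
        Face = X > 0.0f ? 0 : 1;   Major = AbsX;
        Uc   = X > 0.0f ? -Z : Z;  Vc    = -Y;
    }
    else if (AbsY >= AbsZ)
    {
        Face = Y > 0.0f ? 2 : 3;   Major = AbsY;
        Uc   = X;                  Vc    = Y > 0.0f ? Z : -Z;
    }
    else
    {
        Face = Z > 0.0f ? 4 : 5;   Major = AbsZ;
        Uc   = Z > 0.0f ? X : -X;  Vc    = -Y;
    }

    // Remap from [-1,1] on the chosen face to [0,1] texture coordinates.
    U = 0.5f * (Uc / Major + 1.0f);
    V = 0.5f * (Vc / Major + 1.0f);
}

int main()
{
    int Face; float U, V;
    DirectionToCubeFaceUV(0.2f, 0.1f, 1.0f, Face, U, V); // a ray pointing mostly along +Z
    std::printf("face %d, uv (%.2f, %.2f)\n", Face, U, V);
    return 0;
}
```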
The only question remaining is how you get from your 2D screen position to this 3D direction vector while satisfying one of the many fisheye camera models out there. Unfortunately I'm not allowed to answer that part for you, because the company I work for claims the rights to my specific implementation, so you'll probably have to dig into the maths yourself. In case you're a university student with free access to IEEE's catalog, I can recommend the paper Camera-Specific Simulation Method of Fish-Eye Image, which helped me the most with implementing my fisheye camera in Unreal.
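What I can do is show the general shape of the problem with the simplest textbook model, the equidistant projection (r = f * theta). This is not my implementation, the variable names are placeholders, and a real lens needs the camera-specific models the paper goes into:

```cpp
#include <cmath>

// Equidistant fisheye model (r = f * theta) as a bare-bones example.
// Px, Py are pixel coordinates, Cx, Cy the image center and FocalPx the
// focal length in pixels. All placeholder names for illustration only.
void PixelToLookupDirection(float Px, float Py, float Cx, float Cy, float FocalPx,
                            float& DirX, float& DirY, float& DirZ)
{
    const float Dx = Px - Cx;
    const float Dy = Py - Cy;
    const float R  = std::sqrt(Dx * Dx + Dy * Dy); // distance from the image center

    const float Theta = R / FocalPx;        // equidistant: angle grows linearly with radius
    const float Phi   = std::atan2(Dy, Dx); // direction around the optical axis

    // Spherical to Cartesian, with +Z as the camera's forward axis.
    DirX = std::sin(Theta) * std::cos(Phi);
    DirY = std::sin(Theta) * std::sin(Phi);
    DirZ = std::cos(Theta);
}
```

Depending on how your capture is oriented you may still need to rotate that vector into the space the cube map was captured in (the material's Transform node can help there) before feeding it into the TextureSample node.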
And if I ever write a paper or blog post about the topic I’ll be sure to post it here. Good luck!