I’m trying to implement a fisheye camera system for Unreal, and since fisheye cameras can have fields of view of over 180°, working with regular virtual cameras just isn’t going to cut it. I need to make an alternative to Unreal’s camera component: one that renders a cubemap every frame, projects it onto a 2D texture the way a fisheye camera would, and serves that as its output.
Now, I’ve got the math itself sorted out, but how would I go about implementing such a fisheye camera component? Where would I even start? Any clues are appreciated.
Thanks for your input, I didn’t know about player camera managers but will consider using them in the future. It is apparently also possible to set a CameraComponent’s FOV to 180° directly in the Blueprint editor.
Not only that, but a FOV of 180° on a regular CameraComponent is borderline unusable: it renders nothing more than a series of rays intersecting at the image center. Below you can see the first person template with a FOV of 180°.
I do have reasons against using the plugin, though, so I continued to follow your suggestion. I tried setting the FOV to 170° and applying a barrel distortion post-process material like in the video you linked. But that causes the center of the image to appear pixelated, since it only takes up a small portion of the rendered image and is then upscaled by quite a large factor by the barrel distortion post-process effect.
In order to fix that pixelation towards the center of the screen I would need to somehow increase the camera’s rendering resolution beyond the screen/viewport resolution, but I can’t find a way to do that.
The pixelation effect would also be a non-issue if I could render a cube map rather than a single camera viewport. I’m thinking of using a scene capture component cube to render a cube map each frame, then using the formulas I prepared to stitch that cubemap into a fisheye image and treating that as the main camera output.
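Something along these lines is what I have in mind for the capture half; an untested sketch, with the class and member names being my own choices rather than anything from the engine:

```cpp
// FisheyeCaptureActor.h — untested sketch of the capture half: a SceneCaptureComponentCube
// that renders the scene into a TextureRenderTargetCube every frame.
#pragma once

#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "Components/SceneCaptureComponentCube.h"
#include "Engine/TextureRenderTargetCube.h"
#include "FisheyeCaptureActor.generated.h"

UCLASS()
class AFisheyeCaptureActor : public AActor
{
    GENERATED_BODY()

public:
    AFisheyeCaptureActor()
    {
        // Cube capture that re-renders the scene every frame.
        CaptureCube = CreateDefaultSubobject<USceneCaptureComponentCube>(TEXT("CaptureCube"));
        RootComponent = CaptureCube;
        CaptureCube->bCaptureEveryFrame = true;
    }

    virtual void PostInitializeComponents() override
    {
        Super::PostInitializeComponents();

        // Cube render target the capture writes into; 1024 is the per-face resolution.
        UTextureRenderTargetCube* CubeTarget = NewObject<UTextureRenderTargetCube>(this);
        CubeTarget->Init(1024, PF_FloatRGBA);
        CaptureCube->TextureTarget = CubeTarget;
    }

    // The fisheye shader would then have to sample from CaptureCube->TextureTarget.
    UPROPERTY(VisibleAnywhere)
    USceneCaptureComponentCube* CaptureCube;
};
```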
Only problem is that I don’t know how to do that last part. It seems to me that what’s visible on the screen is tied directly to a conventional rectilinear camera with no way to insert my cubemap-fisheye logic.
If you or anyone else has any clue on how I could extend or replace the default camera with the cubemap camera & shader combination I need in order to implement my fisheye camera system, I would be forever in your debt.
Learnt something new, this might produce some cool effects.
I used to have some related code around for the render target, but it appears I removed it from one of my projects long ago. I think you might learn something from the NVIDIA Ansel plugin, but maybe I am overcomplicating things.
Seems like the NVIDIA Ansel plugin has been deprecated; at least it’s no longer part of the engine. I couldn’t find its source code either, but instead I stumbled upon a different image capturing plugin on GitHub with an in-depth tutorial on how it was made.
Its readme explains in detail how you can request rendered images from the rendering thread. This alone is not enough to solve my problem, however.
For now I’ve settled on a different approach: I use a scene capture cube with Capture Every Frame set to 1 and Capture Rotation set to 0 to render the cube map every frame, then use a regular camera with a post-process material which, using the formulas I compiled, draws a fisheye image from the cube map over whatever the camera would normally see.
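In case it helps, this is roughly how I hook the cube render target and the post-process material up from C++; the member names, the base material, and the “CubeMap” parameter name are placeholders for whatever your own setup uses (untested sketch):

```cpp
// Untested sketch: assign the cube render target to the post process material at runtime
// and blend that material into a regular camera. Members ("CaptureCube", "CubeTarget",
// "FisheyeBaseMaterial", "CameraComponent") and the "CubeMap" parameter are placeholders.

#include "Camera/CameraComponent.h"
#include "Components/SceneCaptureComponentCube.h"
#include "Engine/TextureRenderTargetCube.h"
#include "Materials/MaterialInstanceDynamic.h"

void AMyFisheyePawn::SetupFisheye()
{
    // Capture settings matching the setup described above.
    CaptureCube->bCaptureEveryFrame = true;
    CaptureCube->bCaptureRotation   = false;
    CaptureCube->TextureTarget      = CubeTarget;

    // Dynamic instance of the fisheye post process material, so the cube render target
    // can be assigned as a texture parameter instead of being hard-wired in the asset.
    UMaterialInstanceDynamic* FisheyeMID =
        UMaterialInstanceDynamic::Create(FisheyeBaseMaterial, this);
    FisheyeMID->SetTextureParameterValue(TEXT("CubeMap"), CubeTarget);

    // Blend the material into the regular camera's post process chain at full weight.
    CameraComponent->PostProcessSettings.AddBlendable(FisheyeMID, 1.0f);
}
```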
Maybe one day I’ll find a better solution. But still, thanks for your support so far @Roy_Wierer.Seda145.
Hello, I’d like to do something similar. I made it a material and added it as the post-process material of the camera, but the image the camera captures remains unchanged. Could you give me a hint on how to use the cubemap in the post-processing of a regular camera? Thanks!
OK, so first of all you need to make sure you’re not sampling from the SceneTexture:PostProcessInput0 node but from a regular TextureSample node whose texture is set to the TextureRenderTargetCube your SceneCaptureComponentCube renders into every frame.
Unlike with 2D textures, where the TextureSample node accepts two-dimensional UV coordinates as input, with a TextureRenderTargetCube you have to give it a 3D vector as input. Imagine folding your cube map into a cube again and positioning yourself right in the center of that cube. You then shoot a ray in some direction and take the color of whatever pixel of the cube map the ray hits. This is essentially what the TextureSample node does when sampling from a cube map, and the direction of this imaginary raycast is your 3D input vector.
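Just to make that lookup concrete, here is an untested sketch of what such a direction-based cube lookup does internally. This is not engine code, and the exact face/axis conventions differ between graphics APIs; it is only meant to show how a 3D direction turns into “face plus UV”:

```cpp
#include <cmath>

// Returns the face index (ordered +X, -X, +Y, -Y, +Z, -Z) and the UV on that face
// for a given lookup direction (X, Y, Z).
void CubeFaceUVFromDirection(float X, float Y, float Z, int& OutFace, float& OutU, float& OutV)
{
    const float AbsX = std::fabs(X), AbsY = std::fabs(Y), AbsZ = std::fabs(Z);

    float MajorAxis, FaceU, FaceV;
    if (AbsX >= AbsY && AbsX >= AbsZ)
    {
        // Dominant X: sample the +X or -X face.
        OutFace   = (X > 0.0f) ? 0 : 1;
        MajorAxis = AbsX;
        FaceU     = (X > 0.0f) ? -Z : Z;
        FaceV     = -Y;
    }
    else if (AbsY >= AbsZ)
    {
        // Dominant Y: sample the +Y or -Y face.
        OutFace   = (Y > 0.0f) ? 2 : 3;
        MajorAxis = AbsY;
        FaceU     = X;
        FaceV     = (Y > 0.0f) ? Z : -Z;
    }
    else
    {
        // Dominant Z: sample the +Z or -Z face.
        OutFace   = (Z > 0.0f) ? 4 : 5;
        MajorAxis = AbsZ;
        FaceU     = (Z > 0.0f) ? X : -X;
        FaceV     = -Y;
    }

    // Map from [-1, 1] on the cube face to [0, 1] texture coordinates.
    OutU = 0.5f * (FaceU / MajorAxis + 1.0f);
    OutV = 0.5f * (FaceV / MajorAxis + 1.0f);
}
```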
The only question remaining is how you get from your 2D screen position to this 3D direction vector while satisfying one of the many fisheye camera models out there. Unfortunately I’m not allowed to answer this question for you, because the company I work for claims all rights to my specific implementation, so you’ll probably have to dig into the maths yourself. In case you’re a university student with free access to IEEE’s catalog, I can recommend the paper Camera-Specific Simulation Method of Fish-Eye Image, which has helped me the most with implementing my fisheye camera in Unreal.
And if I ever write a paper or blog post about the topic I’ll be sure to post it here. Good luck!
I would caution you that cube captures are obscenely slow, like way slower than doing 6 regular 2D captures of equivalent resolution. This is one of the ways I tried to do it, and ultimately trashed it because it wasn’t worth the huge added cost just to have a slightly more convenient setup.
For what it’s worth, the way I set this up was to just do a ray-sphere intersection (with the camera at the sphere origin) to get the cubemap coordinate. You can adjust the FOV by offsetting the camera position. You won’t get a full 360° fisheye effect this way though, probably not much higher than 270°.
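If it helps, here is roughly the math I mean, written out as an untested plain C++ sketch rather than shader code; the names and conventions are my own, not engine code:

```cpp
// The sphere is unit-radius at the origin; CameraOffset slides the camera along the
// view axis (0 = sphere centre, close to 1 = near the surface, which widens the FOV).
#include <cmath>

struct Vec3 { float X, Y, Z; };

static float Dot(const Vec3& A, const Vec3& B) { return A.X * B.X + A.Y * B.Y + A.Z * B.Z; }

static Vec3 Normalize(const Vec3& V)
{
    const float Len = std::sqrt(Dot(V, V));
    return { V.X / Len, V.Y / Len, V.Z / Len };
}

// PixelRayDir: normalized ray direction of the current pixel in sphere space.
// Returns the direction to sample the cubemap with.
Vec3 CubemapDirFromPixelRay(const Vec3& PixelRayDir, float CameraOffset)
{
    // Camera sits at (0, 0, -CameraOffset) inside the unit sphere.
    const Vec3 CamPos = { 0.0f, 0.0f, -CameraOffset };

    // Solve |CamPos + t * PixelRayDir| = 1 for the positive root t.
    const float B = Dot(CamPos, PixelRayDir);
    const float C = Dot(CamPos, CamPos) - 1.0f;   // negative while the camera is inside
    const float T = -B + std::sqrt(B * B - C);

    // The hit point on the sphere, seen from the sphere centre, is the lookup direction.
    const Vec3 Hit = { CamPos.X + T * PixelRayDir.X,
                       CamPos.Y + T * PixelRayDir.Y,
                       CamPos.Z + T * PixelRayDir.Z };
    return Normalize(Hit);
}
```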
A valid solution, I might add. You’ve built what researchers call a stereographic projection out of a sphere, some scene captures and a perspective camera placed on the surface of the sphere, looking inward.
I’m a lot further into this topic by now and even came up with a similar setup in preparation for my bachelor’s thesis, but I used a Scene Capture Cube instead of 6 scene captures and Unreal’s default sphere mesh instead of a spherized Blender cube. Put a material on the sphere in which the TextureCoordinate gets fed through the ConvertUVToLongLat node and into a TextureSample that samples from the cube render target, and you’re done.
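In case that node is a black box to anyone, this is roughly what I understand ConvertUVToLongLat to do, written out as plain math; untested, and the engine node’s exact axis conventions may well differ:

```cpp
// U wraps the full 360° of longitude, V covers the 180° of latitude from pole to pole.
#include <cmath>

struct Dir3 { float X, Y, Z; };

// Maps an equirectangular UV coordinate to the unit direction used to sample the cubemap.
Dir3 LongLatDirFromUV(float U, float V)
{
    const float Pi        = 3.14159265f;
    const float Longitude = (U - 0.5f) * 2.0f * Pi;   // -π .. +π around the vertical axis
    const float Latitude  = (0.5f - V) * Pi;          // +π/2 at the top, -π/2 at the bottom

    return {
        std::cos(Latitude) * std::cos(Longitude),
        std::cos(Latitude) * std::sin(Longitude),
        std::sin(Latitude)
    };
}
```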
I guess your approach is better, though, because six 2D scene captures render faster than one SceneCaptureCube, plus you can omit the scene capture facing behind the player if your diagonal field of view is below 270° and it’s not visible. Thanks for sharing.
I couldn’t produce a fisheye camera view with a 120° FOV using the latest tips you gave. Can you please help me with this? I want to make a fisheye camera view for a vehicle.