How to use two cameras to render?

How can I have two cameras rendering to the screen?
I don’t want my weapon to be affected by FOV changes and I also don’t want it to clip through different objects.
As a solution, I want one camera to only render the world, then another camera to only render my fps arms on top of it.
How can I do that?

What would be the purpose of this?

In VR, with two eyes, you'd make people throw up by doing this.

For cinematics, you just do different takes…

I wrote the reasons in the original post.
Main Reason: I don’t want the weapon to be affected by the FOV changes.
Secondary Reason: I don’t want the weapon to clip through walls.

Neither reason really makes sense.
Look at Panini Projection.

Unreal doesn’t really allow “compositing”, or even any sort of “sorting”, which is what you would need to achieve what you are asking about in a single render.

If you don’t have a solution, you don’t have to reply.

I think that’s unnecessarily harsh! You can totally get the intended effect.

The original poster wants to render the weapon (and probably arms) as an overlay over the rest of the scene. They want to disable depth testing between the weapon and the rest of the scene, but probably keep depth testing internally within the weapon. This is a typical trick used in old-school FPS games to make things “look nice.”

The two methods typically used to do this in old engines were:

  1. Clear Z; render the scene; clear Z again; render the weapon.
  2. Set the Z buffer depth range to 0.05–1.0; render the scene; set the range to 0.0–0.05; render the weapon.

In both cases, you can change the projection matrix (“FOV”) between the two passes.
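For illustration, here is roughly what those two tricks look like in raw D3D11 terms – a sketch only, since Unreal exposes no hook at this level, and `DrawScene`/`DrawWeapon` are hypothetical stand-ins for the actual draw submission:

```cpp
#include <d3d11.h>

void DrawScene();   // stand-in: issues the world's draw calls
void DrawWeapon();  // stand-in: issues the weapon's draw calls

void RenderFrame(ID3D11DeviceContext* ctx, ID3D11DepthStencilView* dsv)
{
    // Method 1: clear the depth buffer between the two passes.
    ctx->ClearDepthStencilView(dsv, D3D11_CLEAR_DEPTH, 1.0f, 0);
    DrawScene();                        // projection built with the scene FOV
    ctx->ClearDepthStencilView(dsv, D3D11_CLEAR_DEPTH, 1.0f, 0);
    DrawWeapon();                       // projection built with the weapon FOV

    // Method 2: split the depth range instead (fields are TopLeftX,
    // TopLeftY, Width, Height, MinDepth, MaxDepth).
    D3D11_VIEWPORT vp = { 0.0f, 0.0f, 1920.0f, 1080.0f, 0.05f, 1.0f };
    ctx->RSSetViewports(1, &vp);        // scene writes depth 0.05 .. 1.0
    DrawScene();
    vp.MinDepth = 0.0f;
    vp.MaxDepth = 0.05f;
    ctx->RSSetViewports(1, &vp);        // weapon writes 0.0 .. 0.05: always in front
    DrawWeapon();
}
```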

Unfortunately, neither of these will work in Unreal, for various technical reasons. But, you can do it a third way:

  1. Render weapon to offscreen texture
  2. Render scene to screen
  3. Render texture as overlay to screen

This can pretty easily be set up with two different cameras and a render texture.
Getting the FPS player's arms into the off-screen texture is also not that hard; it's essentially the same thing.
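For reference, a minimal sketch of that setup in Unreal C++, assuming a character class with existing `Camera` and `WeaponMesh` components (all names here are illustrative, and the UMG widget that draws the render target over the viewport is omitted):

```cpp
#include "Components/SceneCaptureComponent2D.h"
#include "Engine/TextureRenderTarget2D.h"
#include "Kismet/KismetRenderingLibrary.h"

void AMyCharacter::SetupWeaponCapture()
{
    // Off-screen target the weapon will be rendered into.
    UTextureRenderTarget2D* RenderTarget =
        UKismetRenderingLibrary::CreateRenderTarget2D(this, 1920, 1080);

    // Second "camera": a scene capture that follows the player camera.
    USceneCaptureComponent2D* Capture = NewObject<USceneCaptureComponent2D>(this);
    Capture->SetupAttachment(Camera);
    Capture->RegisterComponent();

    Capture->TextureTarget = RenderTarget;
    Capture->FOVAngle      = 60.f;  // weapon FOV, independent of the scene FOV
    Capture->CaptureSource = ESceneCaptureSource::SCS_FinalColorLDR;

    // Capture ONLY the weapon/arms, nothing else.
    Capture->PrimitiveRenderMode =
        ESceneCapturePrimitiveRenderMode::PRM_UseShowOnlyList;
    Capture->ShowOnlyComponent(WeaponMesh);

    // Hide the real weapon from its owner so it only appears via the overlay.
    WeaponMesh->SetOwnerNoSee(true);
}
```

One fiddly bit this sketch glosses over: getting a usable alpha channel out of the capture, so the overlay composites cleanly instead of as an opaque rectangle.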

Note that any method that renders “differently” will have significant challenges around global lighting. You’ll have to have a full-size proxy for the player and the weapon that only renders for shadows and reflections, not for depth or color. You’ll also have to replicate the lighting environment for the texture render. Both of these will probably not interact perfectly with Lumen.

You will need to check out the specifics of render channels and render passes – I know you can disable an object’s render in the main pass, which will keep its shadows but not render its pixels. However, you probably also need to do something with camera channels to get the stand-in objects to render with a different camera and composite them on top; I forget the exact details.

I do remember that the blueprint-level documentation for this is so-so, and you might want to open up the renderer source code .h files for more details.
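As a rough sketch of the shadow half of that stand-in (where `ProxyMesh` is a hypothetical full-size copy of the player and weapon; these are standard `UPrimitiveComponent` calls, but reflections need separate treatment):

```cpp
// Stand-in that contributes shadows but no visible pixels.
ProxyMesh->SetHiddenInGame(true);      // don't draw it in the main pass...
ProxyMesh->SetCastHiddenShadow(true);  // ...but keep casting its shadow
```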


The solution is: don’t use Unreal.

It’s not something the engine allows for. Period.

Whether you dislike the answer or not doesn’t change the fact that it’s the only answer you, or anyone else, will get.
Particularly when you behave like a two-year-old and randomly flag posts for supposed community guideline violations where none occurred.

@jwatte
It’s been attempted before; it doesn’t work.
Render targets and overlays lack believability, as well as half the stuff you’d need for a decent render.

Regardless of harshness, it is not against community guidelines to say so.
@mindbrain @SupportiveEntity
Please moderate this individual, who keeps abusing the flag system…

Sure! As I said above, lighting and interaction with the world will be a challenge.
But, like, the FOV not changing for part of the scene already lacks believability.
Different people want to achieve different goals, so I’ll let the OP try it and see if they like it.
It may very well be that they don’t.

Tbh, even old games that did this generally lack believability.

But the engine itself severely lacks the sorting options that would make this, and a thousand other things, possible.

This isn’t a topic about adding sorting to the engine, though; this is a topic about doing something the engine straight up doesn’t want you to do, by design…

Can you do it anyway?

No. Not really.
A render target’s edge pixels will never look right when overlaid onto the real render.
Even going for a straight-up 8-bit look with the art direction, the end result is pretty much a wash once you introduce the overlay.
Sure, you can do edge-pixel detection and try to make it work, at a performance cost.
You’ll probably get nowhere, though, as the cost far outweighs what the end result provides.

The underlying assumption being that:
IF you want your character/stuff to “keep arms and legs inside the render at all times” and avoid wall clipping,
THEN you must be looking to render something at half-decent quality…

Therefore, a different engine that allows Z-sorting your final render will be able to do it.
Unreal is not such an engine.

Additionally, as originally answered, to avoid FOV changes you can implement Panini projection.
So, realistically, there is no need to even render the scene separately…

You can also un-project the weapon/hands in a shader, and then apply whatever projection you want. E.g., if the camera and projection matrices are C and P, and your model matrix is M, the default transform T can be thought of as:

T = M * C * P (D3D row-vector conventions)

But if you multiply your T by P⁻¹ * C⁻¹ (the inverses of the projection and camera transforms), you get:

T = (M * P⁻¹ * C⁻¹) * C * P, which collapses to just M. So you can then multiply in your own chosen C2 and P2:

T = (M * C2 * P2 * P⁻¹ * C⁻¹) * C * P, which collapses to M * C2 * P2.

So, you now need to set the transform of your object to M2 = M * C2 * P2 * P⁻¹ * C⁻¹ when you render. Because Unreal doesn’t let you directly affect the matrix, you have to pass it in as a separate matrix to apply in a (custom HLSL) vertex shader.
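In Unreal terms, with `FMatrix` (same row-vector convention as above), the combined matrix itself is a one-liner; the plumbing to get it into a custom HLSL node is the awkward part, sketched here with hypothetical parameter names (`MID` being a `UMaterialInstanceDynamic` on the weapon material):

```cpp
// C, P  : view and projection matrices the scene actually renders with.
// C2, P2: the view/projection you want the weapon to appear rendered with.
const FMatrix M2 = M * C2 * P2 * P.Inverse() * C.Inverse();

// Unreal won't take an arbitrary 4x4 as an object transform, so M2 has
// to reach the custom HLSL (world position offset) node another way --
// e.g. as four vector parameters, one per row ("M2_Row0" etc. are made up):
MID->SetVectorParameterValue("M2_Row0",
    FLinearColor(M2.M[0][0], M2.M[0][1], M2.M[0][2], M2.M[0][3]));
// ...rows 1-3 likewise.
```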

What this will do to lighting, I have no idea! But it might be fine. You could try.

Light is calculated after the vertex shader, and the vertex shader effectively displaces geometry, so the light will look as if it were calculated onto whatever you displaced the object to.
It’s why Panini projection works, too.

I don’t think you need to bother with un-transforming the object. I think you just base the calculations on what you already have, knowing what the FOV is, to keep it cost-effective. There is more to Panini projection than just swapping FOV anyway.

There are also some drawbacks, aside from performance, that go with it.
None of them are as bad as rendering a render target on top of the actual render…

Panini is entirely screen space – it’s essentially a render-target warp effect. So lighting will be done in perspective, non-warped space, and then the framebuffer will be warped. This is why Panini in Unreal can look blurry in the center unless you increase “screen coverage” to over 100%.

… we may be a bit far away from the original question, now, though :slight_smile:

It’s done with vector displacement in most, if not all, implementations I have ever seen.

On one project, for equipment the character was using in first-person POV (weapons, tools, etc.), we used a scaled-down mesh attached close to the camera, inside the character’s collision hull. That avoided any wall-clipping issues (rough sketch below).
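Roughly this (a sketch; component names are made up):

```cpp
// Tiny weapon mesh parked just in front of the camera, inside the
// capsule, so it can never reach a wall.
TinyWeaponMesh->AttachToComponent(Camera,
    FAttachmentTransformRules::KeepRelativeTransform);
TinyWeaponMesh->SetRelativeLocation(FVector(25.f, 8.f, -10.f)); // just ahead of the lens
TinyWeaponMesh->SetRelativeScale3D(FVector(0.1f));              // scaled way down
TinyWeaponMesh->SetCastShadow(false); // a miniature shadow would give the trick away
```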

As for FOV, could you maybe adjust the tiny equipment’s position relative to the camera, and/or distort the mesh scale, based on FOV changes?

Throwing enough math at the problem might get something reasonable looking at different aspect ratios or FOV? :sweat_smile:

Right, but then they wouldn’t have to require that you turn on screen upscaling to use it, nor would you get the black corners and fuzzy center problem. In Unreal, it’s implemented as a pixel space warp, as far as I can tell.

This post process effect is done in the Upscaling pass.

They even document that Unreal Tournament does it differently, with vector displacement:

Another way to use the Panini projection is through a Material function outputting a world position offset to be plugged into the material’s world position offset input pin. This is actually what Unreal Tournament uses, instead of rendering the weapon at a different FOV, to fix the perspective projection

I wouldn’t use the built-in.
The math is fairly simple and readily available to push into a custom HLSL node.
If you parameterize it via an MPC, you end up with a material function that, when applied to items, will automatically move them onto your preset FOV settings (see the sketch below).

Since you are doing vector displacement, you can then also push/pull the verts any way you want.

Probably a necessity at larger FOVs, so as to have more of the weapon on screen…
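A sketch of the CPU side of that idea, with made-up names (the MPC asset and the `WeaponFOVScale` parameter): each time the scene FOV changes, push a correction factor into the collection, and let the custom HLSL / world-position-offset node scale view-space XY by it. Scaling by tan(scene half-FOV) / tan(weapon half-FOV) makes the weapon appear as if projected with its own FOV:

```cpp
#include "Kismet/KismetMaterialLibrary.h"
#include "Materials/MaterialParameterCollection.h"

void UpdateWeaponFOVScale(UObject* WorldContext,
                          UMaterialParameterCollection* Collection,
                          float SceneFOVDeg, float WeaponFOVDeg)
{
    // scale = tan(scene half-FOV) / tan(weapon half-FOV)
    const float Scale =
        FMath::Tan(FMath::DegreesToRadians(SceneFOVDeg) * 0.5f) /
        FMath::Tan(FMath::DegreesToRadians(WeaponFOVDeg) * 0.5f);

    // "WeaponFOVScale" is a hypothetical scalar parameter in the MPC.
    UKismetMaterialLibrary::SetScalarParameterValue(
        WorldContext, Collection, TEXT("WeaponFOVScale"), Scale);
}
```

Note this only covers the plain FOV-swap part, not the full Panini math; only view-space XY is scaled, so depth is left untouched.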

Essentially this