I’ve heard that it is not possible to set a component’s FOV anymore like we could do in UE3.
This is a huge problem for me, so I suppose I’ll have to build this myself. The problem is, I have no idea where to even start.
Any suggestions on how to approach this would be greatly appreciated, and naturally, if I ever manage to solve this, I’ll make the source publicly available.
Being able to set the FOV of the first-person arms would allow you to make them look right even when using a large camera FOV.
This is especially important on larger PC screens, where the player sits close to the screen; otherwise, the arms become these elongated appendages, which look very wrong.
JamesG has pointed me in the direction of applying a projective transform to the mesh for rendering purposes in UPrimitiveComponent::GetRenderMatrix().
This makes sense, but my linear algebra is a tad rusty.
Would it be correct to say that the engine first gets the component’s transformations (scale, location and rotation) in world space (also taking into account relative transformations),
and then after that “applies” the Camera’s FOV to the mesh?
If so, does anyone have an idea of how I would pre-emptively “remove” the Camera FOV transform in GetRenderMatrix(), and replace it with a specific FOV?
I’ve done some tests with FRenderMatrix, which seems to give some promising results, but I cannot figure out how to make the transforms generic enough to take into account the actual current camera FOV.
I hope I don’t sound like an insane person rambling about the 9 quadrillionth digit of Pi.
Ok, I’ve made some progress on this, but it’s not quite right yet.
To continue with this soliloquy, my approach is as follows:
1. Override GetRenderMatrix().
2. Apply the component’s transform to get the world-space location, rotation and scale. The default implementation of GetRenderMatrix() simply returns this value.
3. Apply to the above the inverse transformation of the camera perspective, to “cancel out” the camera FOV that will be applied at render time.
4. Apply the transformation for the required FOV.
5. Pass this transformation to the renderer.
There’s definitely something happening, but it’s not quite right.
Can anyone spot the mistake in my math?
// Get references
APWNWeapon* wp = Cast&lt;APWNWeapon&gt;(GetOuter());
APWNPlayerCharacter* pc = nullptr;

// The current camera FOV that we want to remove (fallback in case the
// owner chain is not fully set up yet)
float currFovFull = 120.0f;

// The FOV we actually want to apply to the mesh
float requiredFovFull = 115.0f;

if (wp)
{
    pc = Cast&lt;APWNPlayerCharacter&gt;(wp->GetOwner());
    if (pc)
    {
        // Get the current FOV from the camera manager (guard the cast,
        // since Controller can be null or not a PlayerController)
        APlayerController* plc = Cast&lt;APlayerController&gt;(pc->Controller);
        if (plc && plc->PlayerCameraManager)
        {
            currFovFull = plc->PlayerCameraManager->GetFOVAngle();
        }
    }
}

// FPerspectiveMatrix takes the half-FOV in radians
float engineFovHalf = FMath::DegreesToRadians(currFovFull * 0.5f);
float requiredFovHalf = FMath::DegreesToRadians(requiredFovFull * 0.5f);

// Perspective matrix that will be applied after we send our data; inverting
// it will allow us to pre-emptively remove the camera FOV
FPerspectiveMatrix enginePerspective(engineFovHalf, 1920.0f, 1080.0f, 1.0f);
FMatrix invEnginePerspective = enginePerspective.Inverse();

// Set up the perspective matrix we want to apply instead
FPerspectiveMatrix requiredPerspective(requiredFovHalf, 1920.0f, 1080.0f, 1.0f);

// Get the world transform for this component and apply the inverse and
// required perspective transforms to it
FMatrix adjTransform = invEnginePerspective * requiredPerspective * ComponentToWorld.ToMatrixWithScale();
return adjTransform;
This code is definitely transforming the mesh in the sort of way I’m looking for, but the problem is that when I decrease requiredFovFull, the mesh actually seems MORE distorted, which does not make sense to me.
Any takers?