FPS: must the camera be parented to the arms for an FPS shooter to work?

Hi there!

Must the camera be the parent of your character’s arms in order to make sure he/she is aiming where you look?

And do you do so with the head also? Right now I’ve fused my FBX robot into one mesh, meaning the head, arms and body are one and the same object, so I don’t have the ability to parent my camera to the arms or head.

Is there another way to do this?

What technique do you use to make sure the character fires with precision at the crosshair in the middle of the screen? Do you set up the arms/weapon while animating to look at something 100m away, or do you specify that inside Unreal Engine somehow?

This is actually an interesting question in a lot of ways. I think the short answer is “it depends”. :slight_smile:

Traditionally, in FPS games, the arm model is, in fact, parented to the camera, which locks the view and makes weapon aim easy to align. The Unreal way of handling shadows and reflections is to have a second full-body mesh that’s invisible to the player’s own camera but still casts shadows and reflections, and which is used for third-person views (showing the player to others in multiplayer games, cinematics, etc.). AimOffsets are used to put the arms in roughly the position you would expect, but it doesn’t have to be precise, because that mesh is only used for shadows or in other players’ views, not to actually aim and fire.
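To see why parenting the arms to the camera “locks” the view, it helps to look at the transform math: the arms sit at a fixed offset in camera space, so however the camera rotates, they project to the same spot on screen. Here’s a minimal 2D, yaw-only sketch of that idea; none of these names are engine API.

```cpp
#include <cmath>

struct Vec2 { double x, y; };

// World position of a point held at a fixed camera-local offset
// (i.e. a child parented to the camera).
Vec2 CameraLocalToWorld(Vec2 camPos, double camYaw, Vec2 local) {
    double c = std::cos(camYaw), s = std::sin(camYaw);
    return { camPos.x + c * local.x - s * local.y,
             camPos.y + s * local.x + c * local.y };
}

// The same world point expressed back in camera space.
Vec2 WorldToCameraLocal(Vec2 camPos, double camYaw, Vec2 world) {
    double c = std::cos(camYaw), s = std::sin(camYaw);
    double dx = world.x - camPos.x, dy = world.y - camPos.y;
    return { c * dx + s * dy, -s * dx + c * dy };
}
```

Whatever yaw you feed in, round-tripping the local offset returns it unchanged, which is exactly the “arms never drift relative to the crosshair” property the parenting buys you.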

However, this approach falls apart if, for example, you want more than just arms visible in the player’s view. If you want to include legs and torso, for example, so that when you look down, you see more than empty space, you have to use another approach.

In our current prototype, we have a camera attached to a socket on the model’s head bone. This works well for everything except aiming. Right now, our gun and hands kinda bob around. The approach we’re planning to try next is to parent the weapon to the camera and then use IK chains to attach the hands to the gun. We haven’t actually implemented it yet, so I’m not sure how well it’s going to work. May be tricky when it comes to doing things like reload animations.
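For what the “IK chains to attach the hands” part involves, here is a minimal planar two-bone solve (upper arm + forearm) via the law of cosines, the kind of thing an IK node does to keep a hand pinned to a camera-parented grip. Shoulder at the origin, angles in radians; the names and setup are illustrative, not any engine’s API.

```cpp
#include <cmath>

struct ArmPose { double shoulder; double elbow; };

// Solve shoulder/elbow angles so the wrist lands on (tx, ty).
// Returns false if the grip target is out of reach, so a caller
// can fall back to a plain animated pose.
bool SolveTwoBoneIK(double len1, double len2, double tx, double ty, ArmPose* out) {
    double d2 = tx * tx + ty * ty;
    double d = std::sqrt(d2);
    if (d > len1 + len2 || d < std::fabs(len1 - len2))
        return false; // unreachable
    // Elbow bend from the law of cosines.
    double c2 = (d2 - len1 * len1 - len2 * len2) / (2.0 * len1 * len2);
    c2 = std::max(-1.0, std::min(1.0, c2)); // guard against rounding
    double elbow = std::acos(c2);
    // Shoulder: aim at the target, then correct for the bent forearm.
    out->shoulder = std::atan2(ty, tx)
                  - std::atan2(len2 * std::sin(elbow), len1 + len2 * std::cos(elbow));
    out->elbow = elbow;
    return true;
}
```

In practice you would run this (or the engine’s equivalent skeletal control) after the weapon has been placed, so the hands always follow wherever the camera drags the gun.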

Would love to hear how other people have dealt with this, though.

Well… players have to be educated; for all these years we’ve been taught that bullets shoot from between our eyes. It’s gonna be pretty hard to adjust to more realistic settings (i.e. “what do you mean I can’t shoot, I have a perfect line of sight”, blah).

For Jeff’s aiming, my suggestion is to introduce an aim-down-sights (ADS) move: it provides some animation and a window of time to send an anim notify, and then you can do the aiming pretty easily. People will assume the flawed aiming during hip fire is normal.
(i.e. a delay between the button press and the actual firing is not acceptable, but a delay caused by the ADS animation is more accepted.)

But if you’re trying to do a twitch shooter, it’s gonna be a PITA, as you need the aim synced every frame to wherever the crosshair is.
So a proper animation setup is required to blend between your animation and the position the gun needs in order to aim where you’re looking.
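One detail that bites when doing that per-frame sync is frame-rate dependence: a plain lerp with a fixed alpha converges at different speeds at different frame rates. Using `alpha = 1 - exp(-sharpness * dt)` makes the blend converge identically regardless of frame time. A small sketch, with `sharpness` as a made-up tuning value:

```cpp
#include <cmath>

// Blend the weapon's yaw toward the camera's yaw each frame,
// frame-rate independently.
double BlendTowardAim(double currentYaw, double targetYaw, double sharpness, double dt) {
    double alpha = 1.0 - std::exp(-sharpness * dt);
    return currentYaw + (targetYaw - currentYaw) * alpha;
}
```

Sixty 1/60s steps land on exactly the same yaw as one 1s step, so players at 30 and 144 fps see the gun settle at the same speed.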

Fortunately, the vast majority of the time in our game, the player won’t be shooting from the hip. It’s a sniper game, so if you’re in close-quarters combat, you’ve screwed up and are paying for it. We have ADS already and it works well (needs a little tweaking, but not bad), but the gun swaying when you’re not scoped is still on our list of things to solve. Our original approach was to do it the way it’s done in the Shooter Game sample, but one of our requirements is that the torso and legs be visible when you look down.

I’m building a sort of MOBA FPS, where you play as robots that shoot stuff directly from their arms, so iron sights aren’t anything I’ll use.

I figure my best bet is to do as you said, Jeff: de-fuse/detach my mesh so I can make the arms a child of the camera. But does this bring “problems”/cons if you still want to see your body when you look down? I imagine I’d have two meshes: the head and arms combined into one as a child of the camera, and the rest of the body as another mesh parented to the collision component?

This is similar to what Jeff describes:

The animations used are standard UDK third person animations, with a custom animation tree.
When aiming, a bone controller is used to bring the weapon to a position offset from the head bone, which needs to be customised for each weapon.
The hands follow via IK.
To ensure a perfect sight picture, the final camera view is fudged slightly until it lines up.
For anyone interested, here is a link: TFP Demo.
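What “fudging the camera until it lines up” amounts to, mathematically, is computing the small yaw/pitch correction that puts the sight point dead centre on screen. A sketch, given the sight’s position in camera space; the axis convention (x forward, y right, z up) and names are my assumptions, not taken from the demo:

```cpp
#include <cmath>

struct ViewFix { double yaw; double pitch; };

// Correction that rotates the view so the camera-space sight
// position (sx, sy, sz) projects to the centre of the screen.
ViewFix CorrectionToCentreSight(double sx, double sy, double sz) {
    ViewFix f;
    f.yaw = std::atan2(sy, sx);                             // rotate toward the sight laterally
    f.pitch = std::atan2(sz, std::sqrt(sx * sx + sy * sy)); // then vertically
    return f;
}
```

Since the sight sits close to dead ahead when aiming, the correction stays tiny, which is why the fudge is invisible to the player.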

I recently found out this is similar to how Crysis 2/3 handles its first person weapon setup:
Place weapon where you want it, use IK to match up hands.

On the other hand, we have Ground Branch:

Animation offsets are used for each weapon position, but the camera is blended between the eye position and a dedicated ‘aim bone’ whenever the player engages the weapons sight.
A dedicated bone was used for several reasons:

  • to control which animations affect the player’s aim.
  • to support additional weapon positions,
    i.e. low/high ready.
  • to support multiple weapon sights.
  • to help the sight picture.

As with the first approach, the final camera view is fudged slightly to ensure a perfect sight picture.
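The eye-to-aim-bone camera blend described above can be sketched as a plain lerp driven by an ADS alpha in [0, 1]. The `Vec3` type and the alpha source are my assumptions; the real setup presumably eases the alpha over the weapon-raise animation rather than snapping it.

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Camera position blended between the eye socket (alpha 0) and the
// dedicated aim bone (alpha 1). Out-of-range alphas are clamped.
Vec3 BlendCameraPosition(Vec3 eye, Vec3 aimBone, double adsAlpha) {
    double a = std::max(0.0, std::min(1.0, adsAlpha));
    return { eye.x + (aimBone.x - eye.x) * a,
             eye.y + (aimBone.y - eye.y) * a,
             eye.z + (aimBone.z - eye.z) * a };
}
```

Because the aim bone is animated with the weapon, anything that moves the gun (sway, recoil, low/high ready) automatically moves the fully-aimed camera with it.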

The problem of sway while aiming can be attacked in a couple of different ways.

The easiest is to use a static pose while aiming and add any sway via code.
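Sway “via code” over a static pose is often just layered sine waves: two waves at unequal frequencies trace a drifting, figure-eight-ish path. A sketch; the amplitudes, frequencies and phase here are made-up tuning values, not Ground Branch’s.

```cpp
#include <cmath>

struct SwayOffset { double yawDeg; double pitchDeg; };

// Procedural aim sway added on top of a static aiming pose.
SwayOffset ComputeSway(double timeSec, double amplitudeDeg) {
    SwayOffset s;
    s.yawDeg   = amplitudeDeg * std::sin(timeSec * 1.3);
    s.pitchDeg = 0.5 * amplitudeDeg * std::sin(timeSec * 2.6 + 0.7);
    return s;
}
```

You apply the offsets to the aim rotation each frame; a hold-breath mechanic can simply scale `amplitudeDeg` toward zero.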
You can see the results of this system here:

For Ground Branch, we use multiple locomotion animations, but I plan to add this sort of system as well, along with controlling the play rate of the idle animation.
In the meantime, people testing our tech build are having a hard time hitting things accurately at long distances :stuck_out_tongue:

As well as this true first-person viewpoint setup is working for Ground Branch, there’s room for more control and improvement, but we’re sitting on that until after we get something out to the public.
We’ve got enough work to do :stuck_out_tongue:

Wow, Kris. Thank you for typing that all up. Extremely helpful.