[Twitch] Fortnite In-Game Animation Tips & Tricks and Game Jam Result - Oct. 30, 2014

Yes, the key here is to make sure your foot placement is deterministic: given the same set of inputs as the server (position in the world, slope of the ground, etc.), a client arrives at the same foot placement. That way all clients and the server have the same view of that actor (i.e. mesh adjustment results in the same changes).

We did this in Gears of War 3. We had foot placement running on the server and clients, and it was deterministic based on the position of the Actor in the world. We looked at the navmesh for an approximation of the slope. That gave us general posture and center-of-mass balancing. Then we did individual traces for each foot to conform to the exact world geometry.
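A minimal sketch of the deterministic part (hypothetical names and simplified math, not the actual Gears of War 3 code): because the result is a pure function of data every machine already agrees on (the actor's position and the traced ground heights), server and clients compute the same pose without replicating it.

```cpp
#include <algorithm>

// Hypothetical sketch: foot placement as a pure function of world inputs.
// Server and clients each run this with the same inputs and get the same
// answer, so no extra replication is needed for the adjusted pose.
struct FootPlacement {
    double LeftFootOffset;   // vertical offset for the left foot IK target
    double RightFootOffset;  // vertical offset for the right foot IK target
    double PelvisOffset;     // pelvis drop so the lower foot can reach
};

FootPlacement ComputeFootPlacement(double ActorZ,
                                   double LeftGroundZ,   // from a per-foot trace
                                   double RightGroundZ)  // from a per-foot trace
{
    FootPlacement Out;
    // Lower the pelvis to the lower of the two contact points so neither
    // leg has to over-extend.
    const double LowestZ = std::min(LeftGroundZ, RightGroundZ);
    Out.PelvisOffset = LowestZ - ActorZ;
    // Each foot is then offset relative to the (already lowered) pelvis.
    Out.LeftFootOffset  = LeftGroundZ  - ActorZ - Out.PelvisOffset;
    Out.RightFootOffset = RightGroundZ - ActorZ - Out.PelvisOffset;
    return Out;
}
```

In the real setup described above, the navmesh slope would drive the coarse posture first, with the per-foot traces supplying the exact ground heights fed into something like this.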

Hope that helps.

  1. Not yet. But it is definitely a great idea, and something we should pursue.
  2. Yes, Lina Halper is working on something like that right now. Not sure what the ETA is for it, though.


I don’t think we have direct support for that in the AnimBlueprint, but you can feed in the position of the animation. So you could update that position over time based on your play rate.
Another option would be to play through an Anim Montage. You can control the Play Rate on the fly there.
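The first option above can be sketched as follows (a hypothetical standalone model, not real AnimBlueprint code): keep an explicit time cursor, advance it by your own play rate each tick, wrap at the sequence length, and feed the result into the anim graph's position input.

```cpp
#include <cmath>

// Hypothetical sketch of the "feed the position yourself" approach.
// The anim graph just evaluates at Position; you own the play rate.
struct ManualPlayback {
    double Position = 0.0;   // seconds into the animation
    double PlayRate = 1.0;   // can be changed on the fly, even mid-playback
    double Length   = 2.0;   // sequence length in seconds (assumed)

    void Tick(double DeltaSeconds) {
        // Advance by our own play rate and wrap around the sequence length.
        Position = std::fmod(Position + PlayRate * DeltaSeconds, Length);
        if (Position < 0.0) Position += Length; // handle negative play rates
    }
};
```

An Anim Montage gives you the same control more directly, since its play rate can be set at any time while it is playing.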



That stream was very informative. Thanks!

Nevertheless, I have a few questions related to the animation blueprint / graph by themselves:

  1. What is the type of the “Evaluate” node? I can’t add such a node in my anim graph.


  2. Is there a specific animation type for poses in UE4, or are they considered plain animations?

I also have a more general question:

What is your workflow in Fortnite to trigger “one-shot” montages in a replicated environment, and react to events / branching point events?

I did not find any “how-to” about this, so based on what I understood in the docs about replication, I came up with the following flow, but I would like to know if there is a more efficient way to achieve it.

As an example, I will take a case from the basketball game I’m developing:

  • A PlayerController pushes a button to make the pawn pass the ball
  • I check if we are on the server. If not, I call Server_Pass. If we are, I call Multicast_Pass
  • In Server_Pass_Implementation, I call Pass, which will then call Multicast_Pass
  • In Multicast_Pass, as it is called on the server and all clients, I play the montage everywhere.
  • I have a branching point in the animation to mark the moment the ball should leave the pawn’s hands to be passed to a teammate. I catch that event, detach the ball from the character’s hand, and do the AddImpulse on the ball, but only on the server, as it is the only one that manages the gameplay. Replication will do the rest.
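The flow above can be modeled schematically like this (plain standalone C++ stand-ins that mirror the names in the post; none of this is real UE4 RPC machinery, just the call graph):

```cpp
#include <vector>

// Schematic model of the client -> server -> multicast "ping-pong".
struct Machine {
    bool bIsServer = false;
    bool bMontagePlaying = false;
    bool bBallDetached = false;
};

struct Session {
    Machine Server;
    std::vector<Machine*> Clients; // includes the machine that pressed the button

    // Multicast: runs on the server and on every connected client.
    void Multicast_Pass() {
        PlayMontage(Server);
        for (Machine* C : Clients) PlayMontage(*C);
    }

    // Server RPC implementation: in UE4 this would be the
    // Server_Pass_Implementation body; here it just fans out the multicast.
    void Server_Pass_Implementation() { Multicast_Pass(); }

    // Branching-point notify: fires everywhere the montage plays, but only
    // the server detaches the ball and applies the impulse.
    void OnPassNotify(Machine& M) {
        if (M.bIsServer) M.bBallDetached = true;
    }

    // Entry point: the pass button was pressed on machine M.
    void Pass(Machine& M) {
        if (M.bIsServer) Multicast_Pass();
        else             Server_Pass_Implementation(); // stands in for the RPC hop
    }

private:
    void PlayMontage(Machine& M) {
        M.bMontagePlaying = true;
        OnPassNotify(M); // the branching point inside the montage
    }
};
```

The montage ends up playing everywhere, while the gameplay side effect (detaching the ball) happens once, on the authority; replication then propagates the result.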

Is this “ping-pong” between the player who initiated the action and the server the correct way to do it? I’ve never made a networked game before and I would like to be sure I understood the flow well enough.

Thanks in advance

Perhaps I missed it while watching the excellent stream (Seriously, it was pretty great) but…

How do you guys handle foot placement IK for the player character? You talk about IK in other regards, but do you have any in-house solution for handling foot placement? The Unreal documentation isn’t suited to player-controlled third-person foot IK.



I think there is an example of IK foot placement in the Content Examples sample project.

You are most certainly correct; however, I’m of the understanding that it works only on NPC characters and not the user-controlled character. There are issues with the collision capsule, I believe. One can pay for a plugin like IKinema, but UDK had a working solution, and I suppose I’m curious whether something like that is already there and I missed it, or simply hasn’t been implemented yet. Hopefully in the future, though?

Why would it work with NPCs and not with player characters? They have the same behavior concerning the capsule component. (I wouldn’t understand why they wouldn’t have the same behavior anyway.)

I’m a little late to the Twitch stream, but I learnt a great deal, so I would like to say thanks!

I do have a related question. I have a skeleton which is retargeted to three body types (small/medium/large), and I found that adding an additive animation breaks the retargeting. All the animation has been done on the medium body type and retargeted to the large/small body types. When I apply an additive animation, the large/small body types get deformed back to the medium body type. Could it be the way the additive system works? The additive animation is simple wind blowing on the secondary joints (such as hair/pouches), and I would really like for it to work on all retargeted body types.


I don’t know if you guys are still reading this, but I have been watching the stream again, and I have one question.

How do you add secondary movement to the weapon pose? I don’t think that part was really explained on stream.

Right, I tried to set up something similar (albeit even more complicated, with dual wielding of arbitrary weapons, where each weapon can provide its own unique poses).

Right now I apply secondary movement to the weapon pose from the basic movement animation, but the setup complexity grows exponentially, and I need different alpha values to apply the proper amount of animation to the weapon pose (which might differ when the character is aiming, idle, or moving). I don’t think this approach is a valid one.