Dec 2, 2020 (Knowledge)
What would be a reasonable way to interface the gameplay ability system with weapon states?
- In Fortnite, when a weapon is equipped we check the weapon being unequipped and remove any abilities or Gameplay Effects that were granted by the weapon. Then we apply any Gameplay Effects for the current weapon (ex. a movement speed debuff for very heavy weapons) and grant the player abilities such as primary and secondary fire, reload, etc.
- Our base weapon class has a pair of virtual functions called PressTrigger and ReleaseTrigger. These are used to support various weapon firing types (single shot, burst, auto, charged shots, etc). When the weapon is actually fired we activate either the primary or secondary fire ability on the instigating actor. Hits are determined by the weapon, or projectile, and then passed to the damage ability as targets. We override the functions related to ability costs to support using various types of ammunition when we fire. Because firing happens so often we don’t use gameplay cues for things like muzzle flash and hit FX. Instead these are all triggered from the base weapon class with weapon specific data to modify the visual and audio output.
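The trigger-virtuals pattern described above can be sketched in plain C++. This is a standalone, simplified illustration, not actual Unreal Engine or Fortnite code; the class names and the `ActivateAbility` callback are hypothetical stand-ins for the real weapon and ability-system types.

```cpp
#include <functional>
#include <string>

// Simplified standalone sketch of the PressTrigger/ReleaseTrigger pattern.
// None of these types are real Unreal Engine classes.
class WeaponBase {
public:
    // Stand-in for activating an ability on the instigating actor.
    std::function<void(const std::string&)> ActivateAbility;

    virtual ~WeaponBase() = default;

    // Firing-mode-specific behavior lives in these two virtuals.
    virtual void PressTrigger() = 0;
    virtual void ReleaseTrigger() {}

protected:
    void Fire() {
        if (ActivateAbility) ActivateAbility("PrimaryFire");
    }
};

// Single shot: one Fire() per trigger press.
class SingleShotWeapon : public WeaponBase {
public:
    void PressTrigger() override { Fire(); }
};

// Automatic: fires while the trigger is held; we model the held state
// and fire once per simulated tick.
class AutoWeapon : public WeaponBase {
public:
    void PressTrigger() override { bHeld = true; }
    void ReleaseTrigger() override { bHeld = false; }
    void Tick() { if (bHeld) Fire(); }

private:
    bool bHeld = false;
};
```

The base class owns the "when did a shot actually happen" decision, so burst, charged, and auto modes only differ in how their overrides call `Fire()`.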
What is the roadmap for the gameplay ability system?
- The ability system was written with Paragon and Fortnite as our test cases. It is also used by other internal projects. You can ignore the warnings about it being experimental. We’re well past the stage of making breaking changes. We don’t have a set roadmap for the ability system. The most recent updates were improved documentation along with the ARPG sample and improvements to gameplay cues. I’d guess that the next significant area we’d improve is providing an example and better documentation for using it in a networked environment. There are still plenty of rough edges but so far none of them have been important enough to get to the top of our priority stack.
Is getting damage amounts on the client using gameplay cues the correct approach?
- No, Gameplay Cues are not guaranteed to reach the clients and may be dropped if network traffic is heavy. You can use something like PostAttributeChanged to update the clients.
When should Gameplay Cues be used?
- Gameplay Cues should only be used for non critical gameplay events or effects. They should be used for things where it’s okay if they don’t get replicated to other clients because they are not guaranteed and may be dropped depending on network traffic. You also need to be mindful about RPC limits when using Gameplay Cues. For example, we don’t use Gameplay Cues for weapon effects in Fortnite because they are too frequent and would create too much networking overhead.
Can you execute gameplay cues via the Cue Manager directly?
- You can, but the biggest advantage of using gameplay cues is the separation of the cue from its execution. You should use tags to trigger cues.
Is using inheritance in the Abilities System a good pattern in terms of reusability?
- It’s similar to other code and depends on the game. We use a mix of inheritance and composition.
How to handle death in the Abilities System?
- One approach is to use attribute set data to trigger abilities and apply tags. For example, you could have a health attribute that is used to determine if death should occur on PostGameplayEffectExecute.
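The attribute-driven death check above can be sketched as follows. This is a minimal standalone illustration, not real GAS code: `MiniAttributeSet`, the string tag set, and the `bDeathTriggered` flag are hypothetical stand-ins for a real attribute set, the owner's tag container, and activating a death ability.

```cpp
#include <set>
#include <string>

// Standalone sketch of triggering death from attribute data, in the spirit
// of the PostGameplayEffectExecute approach; not real Unreal Engine API.
struct MiniAttributeSet {
    float Health = 100.f;
    std::set<std::string> OwnedTags; // stand-in for the owner's tag container
    bool bDeathTriggered = false;    // stand-in for activating a death ability

    // Called after a gameplay-effect execution has modified attributes.
    void PostGameplayEffectExecute() {
        if (Health <= 0.f && !bDeathTriggered) {
            OwnedTags.insert("State.Dead"); // tag gates other abilities
            bDeathTriggered = true;         // death ability would activate here
        }
    }

    void ApplyDamage(float Amount) {
        Health -= Amount;
        PostGameplayEffectExecute();
    }
};
```

Keeping the check in one post-execute hook means every damage source funnels through the same death logic, and the `State.Dead` tag can then block ability activation elsewhere.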
How do you build generic relationships/systems using tags?
- Use a combination of tag layout and tag queries. You should have someone own the tag structures at a high level to protect the tag layout.
How do you think about adding tags vs using Gameplay Effects?
- Gameplay Effects generally modify attributes and add/remove tags. They handle replication and prediction/rollback.
Should we use Gameplay Effects to define states?
- No, you should use tags to represent state and Gameplay Effects to modify attributes and add/remove tags.
Is there a good way to make reusable GameplayCues? For example, a GameplayCue that plays a ParticleEffect and a Sound. But each ability that plays the cue wants to play different effects. Do we need to make a new GameplayCue each time, or can we set something up to share code?
- There’s no built-in way to do this. You could create a more general Gameplay Cue that considers subtags to allow some flexibility, but you’ll need to maintain a mapping from tags to the effects to play.
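The subtag-mapping idea above can be sketched like this. It is a standalone illustration under simplifying assumptions: tags are plain dot-separated strings rather than real hierarchical GameplayTags, and `CueAssets`/`GenericCueHandler` are hypothetical names.

```cpp
#include <map>
#include <string>

// Sketch of one general cue handler keyed by subtags; the handler falls
// back to the closest parent tag when an exact match is missing.
struct CueAssets {
    std::string Particle;
    std::string Sound;
};

class GenericCueHandler {
public:
    void RegisterSubtag(const std::string& Tag, CueAssets Assets) {
        TagToAssets[Tag] = Assets;
    }

    // Returns the assets mapped to the longest matching tag prefix, e.g.
    // "Cue.Impact.Metal" falls back to "Cue.Impact" if unmapped.
    const CueAssets* Resolve(const std::string& Tag) const {
        std::string Current = Tag;
        while (!Current.empty()) {
            auto It = TagToAssets.find(Current);
            if (It != TagToAssets.end()) return &It->second;
            auto Dot = Current.rfind('.');
            if (Dot == std::string::npos) break;
            Current.resize(Dot); // strip the last tag segment
        }
        return nullptr;
    }

private:
    std::map<std::string, CueAssets> TagToAssets;
};
```

One handler plus a data table of tag-to-asset rows scales better than a new cue Blueprint per ability, at the cost of maintaining that mapping.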
What is the best way to pass parameters to a GameplayCue?
- You can derive from FGameplayCueParameters to pass in more parameters. This struct is intended to be extended for project specific needs.
What is the best way to replicate data generated on the server-side of an ability to the client-side of an ability?
- Gameplay abilities aren’t generally replicated. You could have the client kick off an effect that triggers the server to initiate separate Gameplay Effects that run on both the server and the client.
What is the best way to coordinate random number generation between a local client and server ability?
- You could use the prediction key as a seed, but it’s just an incrementing int. You could also replicate FGameplayEventData, or subclass FGameplayAbilitySpec.
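The seed idea can be sketched as follows. This is a standalone illustration, not engine code: the point is that if client and server both seed the same deterministic generator with the same prediction key, they draw identical rolls. An explicit algorithm (xorshift here) is used because implementation-defined generators like `std::rand` are not guaranteed to match across platforms.

```cpp
#include <cstdint>

// Deterministic RNG seeded from a shared value (e.g. the prediction key).
// Same seed on client and server => same sequence of rolls.
struct SeededRng {
    uint32_t State;

    explicit SeededRng(uint32_t Seed) : State(Seed ? Seed : 1u) {}

    // xorshift32: simple, fast, and fully deterministic across platforms.
    uint32_t Next() {
        State ^= State << 13;
        State ^= State >> 17;
        State ^= State << 5;
        return State;
    }

    // Roll in [0, Max).
    uint32_t Roll(uint32_t Max) { return Next() % Max; }
};
```

Since the prediction key is a small incrementing int, it is predictable; if that matters for your game (e.g. anti-cheat), mix it with a server-chosen secret before seeding.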
How would you build a combo ability where local player input can advance the combo?
- Two approaches to consider would be to either apply multiple tags with different durations to define the combo window or use notifications in the montage to tell us when to add or remove the tag that says we can do the next step of the combo.
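The timed-window variant above can be sketched like this. It is a standalone illustration under stated assumptions: time is a float you advance manually, `OpenWindow` stands in for a montage notify or a duration Gameplay Effect applying the window tag, and `ComboTracker` is a hypothetical name.

```cpp
// Sketch of the combo-window approach: a window is opened for a fixed
// duration (as a timed tag or montage notify would), and player input
// only advances the combo while the window is open.
class ComboTracker {
public:
    // Montage notify (or timed effect) opens the window at time Now.
    void OpenWindow(float Now) { WindowEnd = Now + WindowDuration; }

    bool WindowOpen(float Now) const { return Now < WindowEnd; }

    // Returns true if the press advanced the combo.
    bool PressAttack(float Now) {
        if (Step == 0) { Step = 1; return true; }     // start the combo
        if (WindowOpen(Now)) { ++Step; return true; } // chain the next step
        Step = 1;                                     // too late: restart
        return false;
    }

    int Step = 0;
    float WindowDuration = 0.5f;

private:
    float WindowEnd = -1.f;
};
```

In GAS terms, `WindowOpen` would be a tag query on something like a `Combo.Window` tag, so the next ability in the chain can simply require that tag in its activation conditions.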
What is the best way to implement root motion in an ability? Should we be using a montage that has root motion, AbilityTasks that use root motion, or something else?
- Here’s a knowledge base article that includes some detailed info: https://udn.unrealengine.com/s/article/A-holistic-look-at-replicated-movement
- You can use animation root motion by playing montages through the AbilitySystem and it’ll be synchronized between client and server. This is typically how projects start and is a more familiar workflow to animators/gameplay people. Totally fine to do that.
- If you want more gameplay programmer/designer control over the root motion, they can trigger Root Motion Sources (basically custom root motion determined by C++ code) through the existing “ApplyRootMotion*” AbilityTask nodes and it’ll be handled for you. There are several provided Root Motion Sources, covering just about everything we used on Paragon except for a couple of one-off special/custom ones.
How would you recommend setting up ability animations for a character that uses a FirstPerson mesh (arms only) on the local client and a ThirdPerson mesh (full body) everywhere else?
- There are several ways this could be approached. Abilities could emit properties, and two different AnimGraphs (first person and third person) can read from them and trigger animations. Or you could emit animation data that’s dynamically linked into AnimBPs (either Linked AnimBPs or Montages). If you share the same skeleton, you could even emit a single Montage asset that contains two tracks: one for the first-person anim and one for the third-person anim.
When changing an ability’s input (FGameplayAbilitySpec::InputID) on the server and marking it as dirty through the AbilitySpec, it doesn’t replicate the input change down to the client if the ability is marked as a server only ability. How should this be handled?
- We handle this type of case by granting and removing abilities when an item is equipped or unequipped. For slottable items we have some indirection so we tell slot X that the player has used whatever input is associated with it and slot X is responsible for passing that info to the equipped item.
What kind of data should I expect to be replicated about an enemy’s abilities. For example, if swinging a sword at an enemy and they have an “evasive” ability can I check a tag, or ability?
- You can check the tag. PreGameplayEffectExecute() is a good place for that because it returns a bool that says, “Did this even work at all?”. So for evasion, you might want to have the whole thing fail right there.
When dealing damage, is the client simulating that and able to run with it and will the server correct them, or is it the case that you wait for the server to come back with the results?
- The client will move ahead with predicted results for gameplay effects. We have rollback there similar to what we have for abilities as far as client prediction goes. Prediction keys form a linear, incrementing sequence that both sides keep track of: the server will tell the client, “I’m up to number X,” and the client will clean up everything it predicted up to that point, because the server replicates the authoritative results down.
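The "caught up through key X" bookkeeping can be sketched as follows. This is a standalone illustration, not the real prediction-key machinery: `PredictedEffects` and its members are hypothetical names, and in the engine the server's authoritative copies replicate down through normal effect replication.

```cpp
#include <map>
#include <string>

// Sketch of client-side predicted-effect cleanup: effects are stored under
// an incrementing prediction key, and everything at or below the highest
// key the server has acknowledged gets removed (the server's authoritative
// versions arrive via replication).
class PredictedEffects {
public:
    int Add(const std::string& EffectName) {
        int Key = ++NextKey;
        Pending[Key] = EffectName;
        return Key;
    }

    // Server says: "I'm caught up through AckedKey."
    void CatchUpTo(int AckedKey) {
        for (auto It = Pending.begin(); It != Pending.end();) {
            if (It->first <= AckedKey) It = Pending.erase(It);
            else ++It;
        }
    }

    std::size_t NumPending() const { return Pending.size(); }

private:
    int NextKey = 0;
    std::map<int, std::string> Pending; // ordered by prediction key
};
```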
I noticed that clients never enter the PostGameplayEffectExecute() function, it only fires on the server. Is that expected?
- If you are locally predicting an instant gameplay effect, we automatically make the duration infinite. A duration-based effect will go into the aggregator and will never finish executing on the client. The server will do the correct thing, execute it as an instant effect, and replicate it down. That’s how we do the server rollback. Use PreAttributeChange() to update clients on locally predicted effects instead of PostGameplayEffectExecute(). Using a RepNotify is also okay.
What is the difference between executions and modifiers in the context of deductions?
- They both do deductions. The difference is that executions have to have code to back them up. Generally modifiers are preferred; executions were added because the modifier framework couldn’t support Fortnite’s complex damage formula. Modifiers are preferred because they’re less code to maintain and it’s clear how they are applied.
What is the difference between DOREPLIFTIME and DOREPLIFETIME_CONDITION_NOTIFY and when should one use DOREPLIFETIME_CONDITION_NOTIFY with prediction?
- DOREPLIFETIME replicates the property when it changes, and by default the client only calls the property’s RepNotify when the incoming value differs from its local value. DOREPLIFETIME_CONDITION_NOTIFY lets you specify both a replication condition and a RepNotify condition; using COND_None with REPNOTIFY_Always makes the RepNotify fire even when the replicated value matches the value the client already has. That matters with prediction, because a locally predicted attribute may already equal the server’s value and you still want the notify to run.