MotionWarping: Warp Targets and data caching.

In 5.6, some nice new features were added to FMotionWarpingTarget to provide some welcome new behavior. However, the implementation doesn’t play very nicely when the warp target struct is constructed independently, for example via a split struct, or via an API that takes the struct itself, like AddOrUpdateWarpTarget(const FMotionWarpingTarget& WarpTarget).

The EWarpTargetLocationOffsetDirection functionality relies on the calls to

```cpp
CacheOffset(Transform);
RecalculateOffset(Transform);
```

which are only called in the FMotionWarpingTarget constructor.

If you create the struct in blueprint, for instance, and then add it via

```cpp
UMotionWarpingComponent::AddOrUpdateWarpTarget(const FMotionWarpingTarget& WarpTarget)
```

these functions won’t be called, and the functionality you expect will be broken.

The helper function

```cpp
UMotionWarpingComponent::AddOrUpdateWarpTargetFromComponent
```

uses the constructor, and so should work, but there is a gaping hole in the use of the struct itself that provides a code path to broken functionality if you construct an FMotionWarpingTarget yourself and register it directly, rather than through a helper function like the one above.

Perhaps a custom make function is in order here?
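
To make the gap concrete, below is the rough shape of the make node I have in mind. This is only a sketch: the class name is made up, boilerplate includes are omitted, and the parameter list/constructor arguments are assumptions (the 5.6 constructor takes additional offset-related parameters that would also need to be forwarded so the caching actually runs).

```cpp
// Sketch only: a BlueprintPure "make" that routes through the FMotionWarpingTarget
// constructor so CacheOffset()/RecalculateOffset() actually run. The parameter list
// is illustrative and would need to mirror whatever the 5.6 constructor takes.
UCLASS()
class UMotionWarpingHelpers : public UBlueprintFunctionLibrary
{
    GENERATED_BODY()
public:
    UFUNCTION(BlueprintPure, Category = "Motion Warping")
    static FMotionWarpingTarget MakeWarpTarget(FName Name, const USceneComponent* Component,
                                               FName BoneName, bool bFollowComponent)
    {
        // Constructing via the constructor (rather than a split/make struct node that
        // fills fields directly) is what triggers the offset caching.
        return FMotionWarpingTarget(Name, Component, BoneName, bFollowComponent);
    }
};
```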

Second, why is AvatarActor parameterized through the warp target, rather than having the URootMotionModifier use its owning actor and the target transform? Also, based on the math it does in the code, I think the naming should be reversed: EWarpTargetLocationOffsetDirection::VectorFromTargetToOwner should be VectorFromOwnerToTarget.

Third, this new functionality is great and welcome, but the implementation can be improved. EWarpTargetLocationOffsetDirection seems like it ought to be something that only alters the location of your warp target, but because it internally manipulates the target transform of the warp, it has the side effect of breaking your rotation warps, forcing you to use multiple motion warp targets where you previously used one.

For instance, we were commonly motion warping to the target actor with the Facing rotation type. It was an easy way to ensure orientation towards the target for attack animations and the like, but if you use EWarpTargetLocationOffsetDirection, this rotation mode basically becomes useless: close-up melee attacks could be offset laterally or even backwards if you intend to maintain a spacing offset, and what you don’t want is for that offset to cause your rotation to turn away from the target. The existing rotation modes make sense, but to supplement the new offset modes, I think there need to be rotation modes that work on the non-offset target transform, so you still have a mechanism to offset location without borking rotation.

Edit:

Also, I just noticed that I don’t think Epic’s own code dealing with cancelling follow accounts for offsets. This code needs to call FMotionWarpingTarget::GetTargetTrasform (sic) and store that result in Location/Rotation, like the FMotionWarpingTarget constructor does, before turning off follow; otherwise it doesn’t account for the offset functionality.

[Image Removed]

Hi, just a quick update on this. I’ve looked into the first issue that you raised around the constructor that’s being used when the struct is created in blueprint, and it does look like we’re going to need a custom make function to deal with that. I’m looking at getting that implemented at the moment. Once I have something working, I’ll follow up on the other issues.

Ok, I have some changes with a custom make function implemented. It’s shelved at 45081372 if you want to test it out. The existing Make nodes should automatically be upgraded to the new custom make function. But just be aware that if that fails for any reason (you should see errors or warnings in the log when any bp that contains one of the Make nodes is loaded), this will be a breaking change. So make sure to test any of the modified blueprints just in case. I’ll do further testing in Fortnite once I’ve looked at the other issues that you mentioned.

Hi, sorry for the delay in following up. It took a while to get the time from the relevant folks on the dev team to sit down and discuss the various requests here.

In general, the feeling is that we want to be able to more easily support the kind of functionality that you’re talking about (modifying the target, supporting separate targets, etc), but not necessarily via subclassing FMotionWarpingTarget. We discussed the 5.6 changes with the addition of the target offset, and particularly the way the offset is applied. The feeling is that we’ve gone down the wrong route with that since, like you’ve mentioned, once the target is being modified internally within FMotionWarpingTarget it’s logical to then want to subclass that functionality. But the dev team want to keep the motion warp class as the point of extensibility in the system. So the thinking is that we’ll refactor the code that was added in 5.6 and move the target offset logic into the warping code. But the important part is that the modification of the target will need to be extensible via the warping code, so that you can customize it, and also so that there isn’t a need for duplication of code for different warps. And with no engine modifications required.

We are also going to change how the switch-off conditions work since, as you noted, the current implementation is clunky and doesn’t fit with the rest of the target system. What that ends up looking like needs more thinking about, though.

Since it’ll likely be a while until we do this refactor, the suggestion in the short term was that you could do the work in game code to generate the target transform and then just set that on the motion warp target each frame. I know that’s not a great solution since it’s not encapsulated within the warp or target, but it’s apparently how we’ve dealt with the same problem of wanting to have more complex targets in Lego Fortnite.

I also asked about sharing the ballistic warping code, but there are some issues specific to the Matrix demo that mean we can’t share the code, unfortunately. It’s the same problem that stopped us from just shipping it along with the skew warp implementation.

No, you aren’t missing anything. The points that you raised are all good ones. And in fact, we’ve run into similar issues with replication of the warp targets on some of our internal projects. So these are all things that we are thinking about improving in the future. It just may be a while until the dev team get to working on these kinds of improvements.

Sweet, I appreciate your quick response.

No problem. It turned out I hadn’t actually shelved that CL, but I have now, so the files should be visible for you.

In terms of the other things that you flagged up:

> Second, why is AvatarActor parameterized through the warp target, rather than having the URootMotionModifier use its owning actor and the target transform? Also, based on the math it does in the code, I think the naming should be reversed: EWarpTargetLocationOffsetDirection::VectorFromTargetToOwner should be VectorFromOwnerToTarget.

I’ll need to double check with the dev team why we don’t just get the owning actor from the component. Possibly it’s for historic reasons, if the component was optional at some point in the past (and location/rotation could just be set directly, like they are via the make struct node).

In terms of the vector, looking at the math, it’s the target location subtracted from the actor location. That should give us the vector to the actor from the target. Then we project along that by the location offset’s X value to move closer to the actor. So I think the naming is ok here.
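
Roughly, the calculation as I read it (variable names here are mine, not the exact engine source):

```cpp
const FVector TargetToOwner = OwnerLocation - TargetLocation;      // points from the target to the owner
const FVector Direction = TargetToOwner.GetSafeNormal();
// A positive X offset therefore moves the warp point from the target towards the owner.
const FVector OffsetWarpLocation = TargetLocation + Direction * LocationOffset.X;
```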

> Third, this new functionality is great and welcome, but the implementation can be improved. EWarpTargetLocationOffsetDirection seems like it ought to be something that only alters the location of your warp target, but because it internally manipulates the target transform of the warp, it has the side effect of breaking your rotation warps, forcing you to use multiple motion warp targets where you previously used one.

I see that the behaviour that you want here isn’t currently supported. If I understand correctly, you want to effectively have two targets, one for orientation and one for location, so that the location can be offset without affecting the orientation target. I’ve added a task to add support for that since I see how that could be useful, particularly for the facing direction with an offset target example that you mentioned.

Was this working for you at some point in the past? What version were you on previously? I looked at the code prior to EWarpTargetLocationOffsetDirection being added, but from what I could see, we were still just working out a target transform and basing the location target and orientation on that. So I would have expected the same problem if you had an offset from the target.

> Also, I just noticed that I don’t think Epic’s own code dealing with cancelling follow accounts for offsets. This code needs to call FMotionWarpingTarget::GetTargetTrasform (sic) and store that result in Location/Rotation, like the FMotionWarpingTarget constructor does, before turning off follow; otherwise it doesn’t account for the offset functionality.

Yeah, this looks like an oversight. I have another shelf with a fix for this that you can try out. The CL is 45098310.

Thanks for flagging up all these issues.

> VectorFromTargetToOwner…

Yep. My bad. I got them mixed up in my head.

> Was this working for you at some point in the past?

Yeah, our melee attacks for our AI have used warp targets for several years (since early 5.x). It’s a simplistic setup: the warp target is the enemy root component with bFollowComponent=true, and the Facing mode for the rotation so they are looking at the target. It’s simplistic because the attacker can never actually reach the target due to capsule collision, so there is a spacing maintained that facilitates reliable use of the Facing rotation mode. That still works, but if you want to use the new offset functionality, you no longer have a reliable source for the facing vector calculation.

Actually, this brings me to another request. We saw in 5.6 the addition of the UMotionWarpingSwitchOffCondition class, which facilitates user-extensible logic to cancel/pause transforms, follow, etc. This is really cool. I would love to have a similar system provide an extensible class for custom calculations on the warp target itself. UMotionWarpingSwitchOffCondition can’t manipulate the target transform (or an intermediate copy of the target transform). If there were an extensible mechanism for performing calculations on the warp target’s intermediate state, it would be a huge benefit to project-level extensibility. We have several game features that likely require intermediate warp target manipulation. A couple of examples:

  • A melee combat advancement feature where we want to track dynamic targets (bFollowComponent), with offset, but with limits on the range before it breaks off. UMotionWarpingSwitchOffCondition can cancel/pause warps, but it can’t freeze the warp transform. The key to the implementation we are after is that the timing of the warp window is not changed; only that, if the target exceeds the tolerances during the warp, the warp transform is locked at that extent but the warp still runs.
  • Another example: for certain AI abilities, I’d love to use warp targets for the target of an attack, say a leap attack, but with limits on how fast it can track. Having a place to calculate intermediate transforms for a speed-limited warp target would be very useful, so the actual warp target starts at the target location and “seeks” to the target every frame, rather than following it perfectly. In the context of a leap attack, it would basically emulate a tunable air control during the leap warp. bFollowComponent=true is a perfect follow; I’d be implementing more of a “seek” behavior of the warp target with configurable speed parameters (see the sketch after this list).
  • Calculate warp targets with velocity leading taken into account
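
As a rough sketch of the “seek” idea from the second bullet (all names here are mine, and this is the game-side calculation only, not engine API): instead of following the target perfectly, an intermediate warp location is moved towards the live target at a capped speed each frame.

```cpp
// Speed-limited "seek": clamp the per-frame correction so the warp target
// chases the live target instead of snapping to it.
FVector ComputeSeekWarpLocation(const FVector& CurrentWarpLocation,
                                const FVector& LiveTargetLocation,
                                float MaxSeekSpeed, float DeltaSeconds)
{
    const FVector ToTarget = LiveTargetLocation - CurrentWarpLocation;
    const float MaxStep = MaxSeekSpeed * DeltaSeconds;
    return CurrentWarpLocation + ToTarget.GetClampedToMaxSize(MaxStep);
}
```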

You’d be MVP of the year if you could convince the powers that be that this would be a worthwhile addition. It would put power in developers’ hands to do custom treatment of warp targets at the project level. Offset is just the sort of feature that should be calculated at that level. Adding these types of features individually is certainly welcome and better than not having them at all, but y’all won’t be able to cover all the different ways projects might want to do custom treatment of warp target calculations, from the raw state on the FMotionWarpingTarget, through any filtering or secondary feature application, to an intermediate transform that the system actually uses.

> That still works, but if you want to use the new offset functionality, you no longer have a reliable source for the facing vector calculation.

That makes sense, that if you try and use the new offset functionality it’ll affect how the facing direction functionality is applied. The JIRA task that I mentioned previously is just waiting to be triaged by the dev team at the moment. It may be that they want to add something to support different targets, although it’s also possible the recommendation may just be the workaround that you already found of having two motion warps active.

In terms of customizing the warp target functionality, I can see how making the motion warp target extensible could be useful. We would need to weigh that against complicating the API/blueprint-UX, as users would then need to specify a target type.

The question for each of those three customizations is whether they could be achieved via a custom warp type, rather than by customizing the target behaviour. We’ve used other warp implementations internally that have been more complex than the simple skew warp implementation. It feels like it might be possible to do that for each of the examples that you mentioned:

  1. Use the regular offset target transform, but just control when the warp is applied by checking the distance to the target
  2. Take the regular offset target and calculate the intermediate target from that. This sounds vaguely similar to a ballistic warp implementation that we used in the Matrix Awakens demo for the characters jumping from vehicle to vehicle
  3. You could calculate the velocity from a cached target location from the previous frame, although that would mean a frame’s delay

Part of the problem at the moment is that the addition of the offset (and specifically how the offset is applied), along with the stop/pause conditions, have blurred the lines between what the responsibility of the warp target object should be vs the warp itself. I think this all needs a bit of thinking about to make sure we end up with a system that’s extensible but not confusing from a UX point of view.

I’m meeting with some of the dev team later in the week, so I’ll discuss all of these points further with them at that point. But it would be good to hear if you think those custom warp targets that you mentioned could possibly be implemented via custom warps, or whether you think there are specific requirements that mean the warp targets need to be customized.

> although it’s also possible the recommendation may just be the workaround that you already found of having two motion warps active.

I really hope this isn’t the case, as it overcomplicates the usage pipeline back in game code, and doubles the replication overhead, with two targets where there was one for every single warp.

> Use the regular offset target transform, but just control when the warp is applied by checking the distance to the target

This is exactly what we don’t want to do. It changes the duration of the warp. We want the warp to continue, but to a calculated transform at the extent.

> Take the regular offset target and calculate the intermediate target from that. This sounds vaguely similar to a ballistic warp implementation that we used in the Matrix Awakens demo for the characters jumping from vehicle to vehicle

I can definitely see that a warp that is meant to emulate physics really should be a separate warp type, not a simple manipulation of the warp (can y’all share this code, btw?). But still, there is an element of intermediate manipulation of the warp target transform that applies to use cases outside of this, and that should be available in any warp.

> You could calculate the velocity from a cached target location from the previous frame, although that would mean a frame’s delay

Right. The problem is there is no way to inject this manipulation into the system; that is the request. The math is easy, there’s just no extension mechanism available to inject it, short of engine modification. Like UMotionWarpingSwitchOffCondition, maybe a UMotionWarpingTransformModifier object that can be extended to provide project-level extensibility.

I suppose you could subclass URootMotionModifier_SkewWarp at the project level to add features like this, but you’re still having to make engine mods to do it, because you really need to inject logic higher than that, in URootMotionModifier_Warp::Update. Also, I would say this is a hack, because most of the mathematical manipulations you might want to make on the target transform itself are not really related to the warp type. A transform manipulator that accounts for target velocity would be no less useful in URootMotionModifier_Warp than it might be in URootMotionModifier_WarpBallistic. Baking transform manipulation into warps falls down as soon as you have multiple warp types but the transform calculations are tied to specific ones. It’s a violation of separation of concerns that will just cause future problems. I think it’s very important that it be its own thing. All it is, is something that sits between the raw FMotionWarpingTarget and the transform used by whatever warp is running. It should not be tied to a warp type, as it is not changing the behavior of the warp type, only manipulating the transform.

It would be called after

```cpp
FTransform WarpPointTransformGame = WarpTargetPtr->GetTargetTransform();
```

in URootMotionModifier_Warp::Update, and then the warp would just operate on that transform. Manipulating the transform really has nothing to do with the warp logic.
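
To sketch what I mean (this class does not exist; the name and signature are entirely hypothetical), the hook would look something like this:

```cpp
// Hypothetical: an instanced object that sits between the raw FMotionWarpingTarget
// and the warp, so offset, clamping, seeking, velocity leading, etc. can be plugged in
// at the project level without touching the warp types themselves.
UCLASS(Abstract, Blueprintable, EditInlineNew, DefaultToInstanced)
class UMotionWarpingTransformModifier : public UObject
{
    GENERATED_BODY()
public:
    // Given the raw target transform, return the transform the warp should actually use.
    virtual FTransform ModifyTargetTransform(const FTransform& RawTargetTransform,
                                             const FTransform& OwnerTransform,
                                             float DeltaSeconds) const
    {
        return RawTargetTransform;
    }
};

// And the call site I'm describing, inside URootMotionModifier_Warp::Update (paraphrased):
// FTransform WarpPointTransformGame = WarpTargetPtr->GetTargetTransform();
// for (const UMotionWarpingTransformModifier* Modifier : TransformModifiers)
// {
//     WarpPointTransformGame = Modifier->ModifyTargetTransform(WarpPointTransformGame, OwnerTransform, DeltaSeconds);
// }
```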

> Part of the problem at the moment is that the addition of the offset (and specifically how the offset is applied), along with the stop/pause conditions, have blurred the lines between what the responsibility of the warp target object should be vs the warp itself.

I agree. The offset functionality should be one of these hypothetical UMotionWarpingTransformModifier implementations. But it would also be important that this hypothetical UMotionWarpingTransformModifier API support multiple manipulators. In our examples, for instance, we’d want offset plus some custom ones, so ideally we’d provide an array of them, or it would support as much via something like a UMotionWarpingTransformCompositeModifier, like y’all’s UMotionWarpingSwitchOffCompositeCondition.

The biggest (and most awkward) paradigm shift to me with these new additions is the idea that these UMotionWarpingSwitchOffCondition objects must be pre-registered, and that because they are tied to warp targets by name, they require you to establish your warp target names and condition logic up front, as global effectors. We went from a completely free-form structure, where you just used a warp target FName and all the behavior was configurable on the anim notify, to now, if you are using the conditional stuff, having to define target names and conditions ahead of time, and they apply globally. If we’re going to move in the direction of establishing warp target parameterization “globally” by name, the warp targets might as well be an asset rather than FName based. It’s no longer just a name; it’s a name and a set of functionality globally tied to that name.

Thanks for the follow up.

That all sounds great. We can press on with our own custom engine extensions for the time being, but we’ll be looking forward to this refactor. Is there an issue or something I can subscribe to, to get a notification when these changes go into the main branch?

If I may, a note regarding the switch-off condition implementation: I think it would at least make more sense to make those objects edit-inline on the UAnimNotifyState_MotionWarping, as that is currently where all other functional configuration for the warp lives. I think the same applies to functionality like offset. The offset functionality was placed on FMotionWarpingTarget, which seems completely wrong. Having it live on FMotionWarpingTarget means the registrar of the motion warp target has to have intricate knowledge about how the motion warp behaves in order to parameterize it with additional features like this. It also bloats FMotionWarpingTarget, which is a replicated struct, but that is secondary.

However, as soon as we start thinking about tacking more feature configurability onto the UAnimNotifyState_MotionWarping, we immediately run into the undesirable situation of ending up with a ton of redundant state across many different animations, and at that point it starts to look more appealing, in my opinion, to turn motion warps into assets. Like data layer assets are to the world partition system, so too could motion warp targets be standalone assets. As an asset, it would consolidate what would otherwise be a lot of redundant behavioral configuration across many different animations into one asset, saving a ton of work on the part of the user, and also eliminating any need to ever worry about motion warp behavior being inconsistent across a set of animations. As an asset, you immediately have a mechanism for easy extension or modification, even by blueprints.

In principle, FMotionWarpingTarget should just be a wrapper around a static or dynamic transform. That minimizes the replication overhead and puts all the responsibility for what to do with that transform, even whether it should be treated as static or dynamic, on this future hypothetical configurable manipulation pipeline. Maybe I want a leap attack to use a static target for the translation of the leap (with offset), but use the target dynamically for rotation, so the enemy leap-attacks an old position but air-controls its rotation towards a dynamic target. A firm separation of responsibilities is, I think, key to making this a powerful and extensible system.

Thanks, this is useful. I’ve passed the info onto the dev team. I agree that making the motion warp itself into some kind of reusable asset, rather than requiring it to be specified directly via the notify on the montage, would make the system more extensible while removing the need to duplicate a whole load of data. Whatever implementation we end up going with, it’s going to need a good deal of design to get something that’s fit for purpose.

Unfortunately, we don’t have anything that I can share to allow you to track the status of this work. The public JIRA tracker only allows for the sharing of bug tickets at the moment. But if you’re curious about the state of the work, you can always start another EPS thread and just reference this one, and I’ll give you an update. I think it’ll be a while before the dev team is able to look at this since they have a pretty full slate of work for the 5.8 release already.

I’ll close off this thread since I think we’ve covered everything (you can always reopen it if there’s anything further you want to discuss). I’ve committed the fixes that we talked about originally, you’ll find those in the 5.7 release.

Another question I’m interested in your opinion on, which I think is an example of a problem it would be nice to be able to solve entirely within the motion warp system.

Suppose you have an attack animation that is made up of the following.

  1. A stationary 0.5 second anticipation period
  • A small 0.25 second leap period (before the character appears to leave the ground)
  3. Leap animation
  4. Landing

The desire is that during step 2, you do a rotation-only motion warp to face the target. This means that

  1. Either you register the warp transform via a notify event just prior to the warp window (to account for dynamic targeting up to that point)
  2. Or you register the warp transform ahead of time, with bFollowComponent = true, probably just prior to playing the animation, and then just before the warp window starts, use a switch off condition to CancelFollow, thus locking in a static target. This ensures there is enough time for the warp target to replicate to clients.

#1 seems problematic timing-wise in a multiplayer environment. Latency could easily exceed the entire window of the rotation warp, causing the client to be well into or even past a warp without having received the updated warp target. Worse, when this occurs, the motion warp is going to use stale data from the motion warp target list if you don’t remove warp targets perfectly.

#2 seems like a better approach, however, this leads to more functionality needing to be available to a user within the motion warp system.

What makes #1 problematic is basically the same thing that makes doing your own warp target transform calculations externally problematic. You could have an animation event that triggers the application of a static warp target transform just prior to the warp window in the animation, but in a multiplayer environment it’s not going to work as reliably as if you gave the system that information half a second before it was needed, likely bundled with the replication state of the very data used to instigate playing the animation.

To illustrate the replication issues, consider the timing of these two approaches:

  • Method 1
    • Gameplay ability activates, it plays the montage
    • Montage play replicates via GAS
    • Use an anim notify to trigger a commitment to a warp target with AddOrUpdateWarpTarget, pretty much simultaneously with the warp window it is needed in.
    • Warp target replicates independently, with some number of frames already elapsed that it was needed but not present/updated
  • Method 2
    • Gameplay ability activates, it plays the montage, you also set the warp target on the same frame
    • Montage play state replicates via GAS, bundled with the motion warp target
    • Because the state is bundled, it already exists on the same frame the montage state kicks off the animation

Method 2 avoids warp windows with bad warp targets, since it facilitates setting the warp target basically on the same frame as you play the animation.
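
For illustration, Method 2’s ordering looks roughly like this (class, montage, and target names are placeholders, the GAS plumbing is omitted for brevity, and I’m assuming the pre-5.6 AddOrUpdateWarpTargetFromComponent signature):

```cpp
// Commit the warp target on the same frame the montage is played, so both pieces of
// state travel together with the ability activation rather than replicating separately.
void AMyCharacter::StartLeapAttack(AActor* TargetActor)
{
    if (UMotionWarpingComponent* Warping = FindComponentByClass<UMotionWarpingComponent>())
    {
        Warping->AddOrUpdateWarpTargetFromComponent(
            TEXT("LeapTarget"), TargetActor->GetRootComponent(), NAME_None, /*bFollowComponent=*/true);
    }
    PlayAnimMontage(LeapAttackMontage); // hypothetical montage property on AMyCharacter
}
```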

If my intuition here is correct, and you agree that you really need to register warp targets ahead of time to give them the opportunity to replicate before the motion warp actually occurs, then we’re pretty much left with this broader need for a flexible suite of functionality for calculating the warp target transform, one that is usable both by the server and by the simulated clients that consume the replicated target transform.

For example, I want to put a min/max limit on how much rotational warping is allowed, relative to the starting transform. During the anticipation period of the animation, just prior to the rotational warp window, the bFollowComponent warp target can be anything, including the target moving behind the guy, but I only want the warp rotation to be able to alter the initial rotation by +/- 30 degrees or whatever. This suggests new rotation parameters that probably use URootMotionModifier::StartTransform as a reference from which to limit/clamp elements of the warp transform, in this case the rotation. Some of the earlier examples speak to this as well, where you want to put limits on how much warp is allowed, or even to perform your own interpolation of the warp points. This is functionality I am looking to implement now, but I noticed that URootMotionModifier::StartTransform is modified throughout a warp via URootMotionModifier_Warp::OnTargetTransformChanged, so it’s not reliably a transform snapshot from the point the modifier becomes active, as it is commented to be; it gets modified every frame the target transform changes.
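
A minimal sketch of the clamp I’m after (my own math, not engine code; it assumes you keep your own unmodified snapshot of the rotation from when the modifier became active, precisely because StartTransform doesn’t stay fixed):

```cpp
// Limit how far the warped facing can deviate from the rotation captured at activation.
const float MaxYawDelta = 30.f; // degrees
const float StartYaw = ActivationRotationSnapshot.Yaw;                    // your own snapshot, taken once
const float DesiredYaw = (TargetLocation - OwnerLocation).Rotation().Yaw; // raw facing towards the target
const float YawDelta = FMath::FindDeltaAngleDegrees(StartYaw, DesiredYaw);
const float ClampedYaw = StartYaw + FMath::Clamp(YawDelta, -MaxYawDelta, MaxYawDelta);

FRotator WarpRotation = ActivationRotationSnapshot;
WarpRotation.Yaw = ClampedYaw; // feed this into the warp instead of the raw facing rotation
```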

There are countless ways one might want to mathematically limit a motion warp: the min/max adjustment distance of a leap attack, clamping the rotational adjustment of a rotation warp, etc., so it’s important that this future system facilitates all of that. If I’m getting any of my understanding or intuition of the pipeline and the replication elements wrong, please correct me. It’s very easy to dismiss this sort of thing with something like “well, just set your own transform each frame if you are going to do something fancy with it”, but as soon as you bring replication into the mix, that approach falls down. Sometimes that might be necessary, but I think there is a wide category of situations that can be accommodated entirely client side, the same way the newer offset functionality is.

Hi, I think that all of the points that you’ve made here are valid. The replication issue is definitely going to cause problems if you’re calculating the warp targets on the server each frame and then having them replicate down to the clients. If we eventually move to the model that we discussed previously, where everything is done via the Root Motion Modifier class, then that would allow you to calculate the behaviour on the client. But that doesn’t help you in the short term with the work that you’re doing at the moment.

So one suggestion that you could look at for now is to change the replication behaviour of UMotionWarpingComponent::WarpTargets so that it uses the ReplicatedUsing specifier to point at a function where you can choose whether or not to ignore the replicated data. You could then, for dynamic targets, run the external/game code calculation for the target on the client. I know that it’s not a great solution, but hopefully it gives you something that is better than the current behaviour of calculating the target on the server.
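
For what it’s worth, the change I’m describing would look roughly like this (an engine-side edit to UMotionWarpingComponent; the OnRep and the ShouldUseLocallyComputedTarget helper are hypothetical names):

```cpp
// Replicate the targets via an OnRep so the client can decide, per target, whether to
// accept the server's value or keep a locally computed one.
UPROPERTY(Transient, ReplicatedUsing = OnRep_WarpTargets)
TArray<FMotionWarpingTarget> WarpTargets;

UFUNCTION()
void OnRep_WarpTargets(const TArray<FMotionWarpingTarget>& OldWarpTargets)
{
    for (FMotionWarpingTarget& Target : WarpTargets)
    {
        // Hypothetical game-specific filter: for targets the client computes each frame
        // (e.g. dynamic "seek" targets), keep the previous local value instead.
        if (ShouldUseLocallyComputedTarget(Target.Name))
        {
            if (const FMotionWarpingTarget* OldTarget = OldWarpTargets.FindByPredicate(
                    [&Target](const FMotionWarpingTarget& Old) { return Old.Name == Target.Name; }))
            {
                Target = *OldTarget;
            }
        }
    }
}
```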

I can also look at making a change to how URootMotionModifier::StartTransform is calculated so that we have a copy of the original transform that isn’t modified when the target is recalculated, would that help give you what you need?

Hey [mention removed], I don’t need anything short term. I am good with making my own local engine changes for our needs to get us by. I just wanted to provide feedback on these things in hopes they can be accounted for in the engine in a more official capacity with this future work, and also to verify my intuitions about the pros and cons of different approaches, in case I am missing something about the implementations and/or the interaction with other parts of the engine (replication, etc).

Sorry to bump this again, but one more issue just came up with some AI work I am doing related to motion warping.

We have melee attacks that have motion warped advancements in them, and we’re experimenting with switch off conditions to pause the root motion of a specific warp if the actor hits something (like another actor, a wall, etc). For the use case, think of a lunge attack, or a series of melee strikes as the character moves forward.

These pauses to root motion should be scoped to each individual warp window, but in the switch off object implementation the state (bWarpingPaused and bPauseRootmotion) lives on FMotionWarpingTarget, so there doesn’t really seem to be a way to do anything at the scope of the warp window in the animation. Do you know of something I may be missing that enables this?

I suppose I could use discrete warp targets (uniquely named) with the same data, but that’s not super ideal either. I’d still need some mechanism available in the switch off object to determine what warp window I am in, so I can selectively return the switch-off result for the window I care about.

Looking at how the warp windows are implemented in UAnimNotifyState_MotionWarping, it looks like it should be possible to derive a custom notify class from UAnimNotifyState_MotionWarping such that, in OnRootMotionModifierUpdate, I could update some state on the URootMotionModifier to suppress/pause the root motion, and it would be scoped to the specific modifier, rather than global like the switch off conditions are.
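
Something along these lines is what I’m picturing (sketch only: the subclass names, the bSuppressRootMotion flag, and the collision helper are all hypothetical; only the OnRootMotionModifierUpdate override comes from the existing notify class):

```cpp
// A derived notify that flips a per-modifier pause flag, so the suppression is scoped
// to this warp window rather than to the shared FMotionWarpingTarget.
UCLASS()
class UAnimNotifyState_PausableMotionWarping : public UAnimNotifyState_MotionWarping
{
    GENERATED_BODY()
public:
    virtual void OnRootMotionModifierUpdate(UMotionWarpingComponent* MotionWarpingComp,
                                            URootMotionModifier* Modifier) override
    {
        Super::OnRootMotionModifierUpdate(MotionWarpingComp, Modifier);

        if (UMyPausableSkewWarp* PausableWarp = Cast<UMyPausableSkewWarp>(Modifier))
        {
            // Hypothetical per-modifier state on a custom URootMotionModifier subclass,
            // driven by hypothetical game-side collision state for this warp window.
            PausableWarp->bSuppressRootMotion = HasHitBlockingActorThisWindow(MotionWarpingComp);
        }
    }
};
```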

The other thing that concerns me about the condition stuff and these state elements on the warp target is that these boolean states aren’t replicated. Is that not potentially problematic?

Hi Jeremy, sorry for the delay, I’ve just been looking again at these issues today. You’re right, there’s an assumption in the switch off condition code, effectively that the warp targets aren’t shared. But they can be, and it sounds like that’s what you’re doing - multiple root motion modifiers that are using the same target? In that case, you will run into problems with the current code: meeting the condition to pause the target for one modifier will effectively pause all the modifiers that use that target.

Given this and all the issues you had previously with customizing the target transforms, it might be simpler to just create a new root motion modifier type where you copy most of the code from the skew warp but change how the target and switch off conditions are applied, so that you can handle those in a way that’s completely independent of the existing implementations. I know it’s not a great solution, but I think your use cases are more complex than those of most projects that have used the motion warping code.

In terms of the bools not being replicated, I think that should be ok since the targets on the component will be replicated. And the modifier itself should then pull the updated values from the target on the next update.