So I’m trying to blend animations for my artificial intelligence. I want it to attack and run when within range.
The attack function is being called; I’ve confirmed it with print strings, and I also added a delay the same length as the animation montage and printed whether anything else was being called. The anim blueprint graph works fine, and for the blended bone I copy-pasted the exact bone name I wanted to manipulate, yet the montage never plays.
If it means anything, the node previewer (the flowing white triangles that signify a connection) stops previewing as soon as I call the montage in the AI character’s blueprint.
Attached is a picture of what’s going on
In-game, once the character sees me it runs to me, but instead of playing the montage it just keeps running into me forever.
Anyone have any ideas???
I think it has to do with the montage slot group, but it should be working, as the slot group’s slot matches the one in the blueprint.
If it runs towards you endlessly, then it’s a logic error in your AI handler (whether you’re using UE4’s standard Behavior Tree system or not), not so much in the Montage itself.
In terms of the Raw animation mismatch, make sure that:
- The slot group exists
- The slot name exists and is properly assigned
- The slot is processed by the Animation Blueprint
- The montage is configured to run on that slot
- The layering per bone is configured correctly
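A quick engine-free way to picture the failure mode this checklist covers: the montage only reaches the output pose if the slot it targets is one the Anim Graph actually processes. Here is a minimal C++ sketch of that check (the function and names are hypothetical; in the editor this is just the Slot node’s name matching the slot set on the montage asset):

```cpp
#include <set>
#include <string>

// Engine-agnostic model of the slot check: a montage only contributes a
// pose if the slot it targets is actually processed by the Anim Graph.
bool CanMontagePlay(const std::string& MontageSlot,
                    const std::set<std::string>& AnimGraphSlots)
{
    // A typo or stale slot name fails silently in-engine: the montage
    // "plays" but no pose ever reaches the final output.
    return AnimGraphSlots.count(MontageSlot) > 0;
}
```

For example, if the Anim Graph only has a `DefaultSlot` node but the montage targets `UpperArea`, nothing visible happens even though the Montage Play call reports success.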
Also, the image you posted is quite low resolution, which makes it hard to actually see what you have there.
Thanks for responding. Yeah, it does use the UE4 Behaviour Tree, but when I call it, it runs the correct task (pictures attached).
The slot does exist in the dropdown as ‘UpperArea’, which is highlighted in another picture.
Sorry for the resolution, here are some zoomed in images of the problems (plus running BT)
If you right-click the images and open them in a new tab, you should be able to zoom in.
Edit: OK, for some reason it works now, but it barely plays the animation or skips it entirely… It’s only when I exit the navmesh that it does the blended animation, and even then at about half the intensity.
To be honest, I’ve never liked the Behavior Tree. While it can be great, it leaves you at the mercy of the engine, which is never a good thing (you’ll understand that better if/once you get working in C++). As a person working on an AI myself, I’m actually making my own decision and state handler. Lol.
All in all, the half-intensity thing is an error in your blending configuration. The skipping-it-entirely part is hard to help with; please specify what’s going on. It could also have to do with another BT task that’s telling it to do something else animation-wise (if you’re sure this isn’t the case, no need to address it). As for the NavMesh behaviour, it’s probably down to how you’re handling the anim state relative to the BP and the Behavior Tree.
I’ll probably get to C++ at a later stage; it’s just that blueprints feel simpler to use. As for the issue, I’ve reconfigured the AI and redone the montage, so the only issue that persists is that the layered blend animation still only plays at half the intensity, if that makes sense. Is there a way to scale the blending intensity?
The bigger the Blend Depth value, the less intense the animations. As for the blending variable, the strongest alpha value is 1.
I’ve tried to set the values as per your image, except the Blend Poses by Bool (not sure what that one does, sorry), but the issue still persists; in fact, decreasing the blend weight seems to weaken the animation.
Is there perhaps another way to play both animations at full intensity, so the upper body can do its lunge while the lower half still runs/idles?
Make sure that the blend poses plugged in below are what you want to blend. In other words, the top pose (base) receives info from the blend pose (additive).
The Blend Depth is essentially an option for how many bones down the hierarchy you want the blend to run from. A Blend Depth of 2 means the blend will start from (in this case) Spine_03 instead of the specified Spine_01 (in terms of full alpha blending).
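As a rough engine-free sketch of that ramp (a linear model only, not necessarily the engine’s exact math; `LayeredBlendWeight` is a made-up name), the effective weight at a bone some number of steps below the branch bone could look like this:

```cpp
#include <algorithm>

// Hedged model of how Blend Depth ramps in the layered pose: with depth 0
// the branch bone gets full weight immediately, while with depth N full
// weight is only reached N bones further down the chain (so depth 2
// starting at Spine_01 reaches full alpha around Spine_03).
float LayeredBlendWeight(int BonesBelowBranch, int BlendDepth)
{
    if (BlendDepth <= 0)
        return 1.0f; // no ramp: full-intensity blend from the branch bone on

    // Linear ramp-in over BlendDepth bones, clamped to [0, 1].
    float Ramp = static_cast<float>(BonesBelowBranch) /
                 static_cast<float>(BlendDepth);
    return std::min(1.0f, std::max(0.0f, Ramp));
}
```

Under this model, a Blend Depth of 2 starting at Spine_01 gives bones near the branch only partial weight, which would also line up with the “half intensity” observation above.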
The mirror pose node is from a plugin that we use. For your case, you should not worry about it much.
Layered Blend Per Bone is one of the only reliable ways to divide animations into segments. The only other way would be to author the animations as additives if you want them forced onto a set of bones, and even then you’d have to modify the animations.
I see. This is a bit of a dodgy workaround, and I’m not sure whether it means the issue is with the montage or the slot group, but if I drag and drop the attack anim into the slot twice, it works as intended (though it attacks twice). I also set Blend Depth to 0 just to be safe.
Oh?? So if I have the two attack anims in the slot, it’s not the slot that’s the issue??
Here’s a pic demonstrating what I mean
Like I said, having the anim twice in the same upper slot seems to do the trick; the only real issue is working out how to do a blueprint interface call with two anims in mind.
If it’s executing twice, it’s an error in your logic (BP-wise).
The AnimGraph doesn’t have anything to do with this. “An error in your BP” most likely means one of the following:
- You’re setting the montage to loop more than once (in whichever event is actually running the montage; programming BP).
- With how you’re handling the montage setup (and layer transitioning), you may be creating an accidental execution loop (programming BP).
- If you have any timers that keep track of the attack and/or the general character state machine, there may be an error in that logic too (programming BP).
- Some other option (likely a hybrid of and/or related to the three above).
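One common shape for the first bullet, sketched engine-free in C++ (`AttackGuard` and its members are hypothetical names): latch a flag when the attack starts and clear it from the montage’s finished/blend-out notify, so a re-trigger while the montage is still playing is simply swallowed:

```cpp
// Minimal re-trigger guard: whatever event fires the attack should latch
// a flag so a second call cannot start the montage again until the first
// one has finished.
struct AttackGuard
{
    bool bIsAttacking = false;
    int  MontagesStarted = 0;

    // Returns true only when a new attack was actually started.
    bool TryStartAttack()
    {
        if (bIsAttacking)
            return false;   // already mid-attack: ignore the re-trigger
        bIsAttacking = true;
        ++MontagesStarted;  // stands in for the Montage Play call in BP
        return true;
    }

    // Call this from the montage's blend-out / completed notify.
    void OnAttackFinished() { bIsAttacking = false; }
};
```

With this in place, a BT task that fires the attack event twice before the montage ends only starts the montage once.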
I personally don’t like working with Montages for important things that may change dynamically (such as attacks/main interactions). The approach I went with was making a component that handles the animations and phases separately, so the system isn’t dependent on animation information. For my needs, though, it ended up as a 4k+ line system, lol.
Funny that you mention that; I’m also in the process of recreating a handler for the Behavior Tree. Lol.
I haven’t worked too much with it, as I find it great for simple behavior structures but horrible for really complex things. In this case, however, it seems like a good fit for what you’re doing. I would advise against Montages for now, since you may not yet be fully familiar with the ways in which Montages work; you can use the Play Animation as Dynamic Montage functions and simply handle the attack timers that way.
Thanks for being patient with me… Here is everything that handles the attack sequence:
Includes the behaviour tree, The behaviour tree task as well as the attack montage which is called in the monster character BP.
No looping is being done on the montage; the closest thing to looping is the Finish Execute when attacking.
The montage and even the AI is based off a tutorial series I’m following, and I’m mixing it as I go.
As for your 4k+ line code system… I’m nowhere near that skilled, which is probably evident from me not getting the issue here…
Edit: I changed all the delay nodes pictured to 1.8 seconds to see whether the animation was being skipped halfway through. That’s not the case; it simply does the half shrug again, waits out the remainder of the delay, and then calls attack again.
Oh, OK, neat. So I can still have the enemy run while attacking/idling using a dynamic montage function? I’d like it to still use the interface, as I intend to have multiple enemy/neutral types that will call this function. Also, could I please get an example of how to do that?
Think of the game systems as layers (similar to an onion or a planet’s core). Animation-related data is one of the crust-type layers, while your question refers more to the mantle/inner layers. Animation data can, simply put, be seen as a way to visualize already-established functionality.
To answer your question: yes. The only “no” may be if your animations have root motion enabled; if so, you may need to implement them simultaneously. I strongly suggest locking your animations to the root and computing the 3D world transforms such an animation would require. This is especially true if you’re working on a multiplayer/networked project, but that topic requires a different question altogether.
Your event that executes the montage would simply execute the other function instead. If the float return value is > 0, set a timer for that duration. At the end of said timer, check the conditions established to continue the attack with the second phase of the animation. If the conditions are met, just continue (run the second animation and whatever else you may need).
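That flow can be sketched engine-free in C++ (all names here are hypothetical; in a real project the “timer” would be Set Timer by Event in BP or FTimerManager in C++, and the string list stands in for the actual animation calls):

```cpp
#include <functional>
#include <string>
#include <vector>

// Two-phase attack model: play phase one, and if it reports a positive
// duration, schedule a check that plays phase two only when the
// continue-conditions still hold when the timer fires.
struct TwoPhaseAttack
{
    std::vector<std::string> Played;      // stands in for montage/anim calls
    std::function<bool()>    CanContinue; // conditions for the second phase

    // Returns the duration the caller should wait; <= 0 means it failed,
    // mirroring the float return value of a Play Montage call.
    float PlayPhaseOne()
    {
        Played.push_back("AttackPhase1");
        return 1.8f; // assumed anim length, matching the thread's delay nodes
    }

    // Invoked when the timer set from PlayPhaseOne's return value fires.
    void OnPhaseOneTimer()
    {
        if (CanContinue && CanContinue())
            Played.push_back("AttackPhase2");
    }
};
```

If the conditions fail at timer time (the target moved out of range, say), the second phase simply never runs, and the AI can fall back to its movement logic.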