Hi!
Nice, using StateTree with GPC seems to be more popular than I expected. I sadly haven’t tested it much yet, since all the projects I’ve been working with so far are going with the Blueprint director. Of note, the Game Animation Sample Project uses a Chooser table with the Blueprint director. The StateTree director needs some attention and is missing a few features… but I have a few improvements scheduled for it in 5.8.
> Regarding the “sandwich” approach (`Old BaseRig` -> `ActionRig` -> `New BaseRig`), has the team considered supporting a workflow where the old `BaseRig` is reactivated (un-paused/popped back into focus) instead of pushing a completely new instance?
I haven’t considered it much, since the current design was already locked down in my mind from a camera system I implemented at a previous company.
But the reality is that moving things in the blend stack like this leads to issues. For instance, consider three different camera rigs: A, B, and C. You start with A, then go to B, then quickly to C, and then back to B. With the “new instance” design, the blend stack at that point is [ A1 - B1 - C1 - B2 ] (that is, an instance of A, an instance of B, an instance of C, and a second instance of B). If you were instead to interrupt B and re-push it on top, you would go from [ A - B - C ] to [ A - C - B ] on that last activation, as you move B from the middle of the stack to the top. And so the order in which the outputs are blended gets suddenly changed: instead of blending A and B, and then blending C on top, you blend A and C, and then B on top. The overall result of the blend stack “pops” in some non-obvious way. I believe there is an old GDC presentation from the Uncharted team at Naughty Dog that briefly explains the problem, and how they also just went to an “always push” model for their blend stacks. Some animation systems do the “interrupt and move” approach, but those tend to be limited to a small number of entries in their blend stacks to avoid the problem I described. I didn’t want to have this limitation since I know some fast-paced games can have more than half a dozen entries in a blend stack when things get intense.
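To make the “pop” concrete, here is a minimal toy model of a blend stack in C++ (not the actual GPC implementation, just the standard bottom-up lerp fold). With the same weights, moving B from the middle to the top changes the blended result instantly, while pushing a fresh B instance at 0% weight leaves the result untouched:

```cpp
#include <cassert>
#include <vector>

// One blend stack entry: a value (standing in for a full camera pose)
// and the weight at which it blends over everything below it.
struct Entry { double value; double weight; };

// Bottom-up fold: each entry lerps the running result toward its value.
double Evaluate(const std::vector<Entry>& stack) {
    double result = 0.0;
    for (const Entry& e : stack)
        result = result + (e.value - result) * e.weight;
    return result;
}
```

With A = 0, B = 10, C = 20, A fully blended in and B and C at 50%: `[ A, B, C ]` evaluates to 12.5, but reordering to `[ A, C, B ]` evaluates to 10, so the camera pops even though no weight changed. Pushing a second B instance at 0% weight (`[ A, B, C, B2 ]`) still evaluates to 12.5, and B2 then blends up smoothly from there.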
There are a couple of things you can do to fix your issue. The first is to look into whether the AbilityRig could instead be a global Camera Modifier. The Base, Global, and Visual blend stacks are different types of stacks compared to the main blend stack. They are called “persistent stacks” in the code because the rigs running in them stay there until explicitly removed. A rig reaching 100% above them doesn’t pop anything. You can also insert and remove rigs freely, although of course you’d better make sure they blend in and out correctly to prevent bugs. There are Camera Modifier APIs to start and stop things like these. When you stop a Camera Modifier, it blends out before removing itself. If that happens mid-blend-in, it interrupts the blend and reverses it. This is possibly a good solution if your skill ability feature should effectively “take over the camera” for a short time -- since these things run in the Global stack, they are layered on top of whatever happens in the main stack, which may or may not fit your use case.
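As a rough conceptual sketch in plain C++ (not the actual engine API; `PopBelowFullWeight` and `TickPersistent` are names I made up for illustration), the difference between the push-only main stack and a persistent stack could be modeled like this:

```cpp
#include <cassert>
#include <vector>

struct Rig { double weight; bool pendingRemoval; };

// Main (push-only) stack behavior: once an entry reaches full weight,
// everything below it no longer contributes and gets popped.
void PopBelowFullWeight(std::vector<Rig>& mainStack) {
    for (int i = static_cast<int>(mainStack.size()) - 1; i >= 0; --i) {
        if (mainStack[i].weight >= 1.0) {
            mainStack.erase(mainStack.begin(), mainStack.begin() + i);
            break;
        }
    }
}

// Persistent stack behavior: entries stay until explicitly stopped; a
// stopped entry blends out and removes itself once it reaches zero weight.
void TickPersistent(std::vector<Rig>& stack, double blendSpeed) {
    for (auto it = stack.begin(); it != stack.end();) {
        if (it->pendingRemoval) {
            it->weight -= blendSpeed;
            if (it->weight <= 0.0) { it = stack.erase(it); continue; }
        }
        ++it;
    }
}
```

The key difference: in the persistent stack, nothing is ever popped by another rig reaching 100% — removal only happens because *that* rig was stopped and finished blending out.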
The second option requires some custom code, but you could implement a custom reversible blend for your AbilityRig, so that when it blends up to 10% and gets interrupted, your blend would go back down to 0%. At that point you would have a 0%, non-contributing entry in the blend stack, and it would stay there until something higher up the stack reaches 100% and pops everything below, but visually and logically it would be what you want.
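A minimal sketch of what such a reversible blend could look like (hypothetical names and structure; in practice this logic would live in your custom blend class):

```cpp
#include <cassert>

// A blend whose alpha rises toward 1 while active, and falls back toward 0
// once interrupted, instead of freezing at its current value.
struct ReversibleBlend {
    double alpha = 0.0;    // current blend weight, 0..1
    bool reversed = false; // set when the blend gets interrupted

    void Interrupt() { reversed = true; }

    void Tick(double deltaAlpha) {
        alpha += reversed ? -deltaAlpha : deltaAlpha;
        if (alpha > 1.0) alpha = 1.0;
        if (alpha < 0.0) alpha = 0.0;
    }

    // Once fully reversed, the entry contributes nothing and can sit
    // inertly in the stack until something above it pops everything.
    bool IsInert() const { return reversed && alpha == 0.0; }
};
```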
The last option requires custom code *and* is pretty involved, but I designed the system to be more or less independent of the exact implementation of the main blend stack. If you look at the default root camera node evaluator, that’s the one that knows about these blend stacks. The rest of the system only knows about “activating” and “deactivating” rigs. This is because I knew some people might want to customize the heart of the system, possibly running with a different setup than the default one with the 4 blend stacks (1 push-only stack, 3 persistent stacks).
Does this all make sense?
> How would you approach this kind of decoupled-then-realigned camera movement natively within the GPC framework?
The GPC rigs don’t care about what the GPC component is attached to. If you run a rig that is only, say, an offset node [X += 10] (i.e. a rig that just moves the camera 10 units to the side), you don’t want that logic to run from an absolute position such as the world origin. So by default, the rig evaluation “starts” at the location of whatever “owns” it. In the case of a rig running from a GPC actor/component, that’s the component’s location. And so if the component is on the character, or on some other actor in the level, the rig runs from there, and then it moves the camera 10 units to the side. But the rig could completely ignore that and move the camera elsewhere. You could place a GPC rig actor in the sky, but run a rig whose first node moves the camera to a specific location, or attaches the camera to the player pawn, or whatever.
So you can have rigs in your character cameras that go and attach to some other actor, or some bone, or whatever else you need. You could have a rig that looks at where the last rig’s result was, and goes there, and then stays there even if the character moves. So while you may need to write a custom node in C++ or Blueprint or something, you can totally “detach” your camera rigs from whatever actor/component they’re being run from. Does that answer the question?
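To illustrate the idea in isolation (a toy model, not the actual node API): an offset node works relative to wherever evaluation started, while an “attach”-style node can simply discard the incoming pose and go wherever it wants:

```cpp
#include <cassert>

struct Vec3 { double X, Y, Z; };

// The evaluation result starts at the owning component's location; each
// node in the rig is then free to offset it or replace it entirely.
struct CameraPose { Vec3 Location; };

// An offset node: nudges the camera relative to where evaluation started.
void OffsetNode(CameraPose& pose, const Vec3& offset) {
    pose.Location.X += offset.X;
    pose.Location.Y += offset.Y;
    pose.Location.Z += offset.Z;
}

// An "attach" node: ignores the incoming pose and snaps to a target,
// effectively detaching the rig from its owning actor/component.
void AttachToTargetNode(CameraPose& pose, const Vec3& targetLocation) {
    pose.Location = targetLocation;
}
```

So a rig made of just the offset node follows its owner around, while a rig whose first node is the attach node produces the same camera no matter which actor runs it.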
Cheers!
Ludo.