About GameplayCamera Design

Hello,

I’m writing with a few questions about the GameplayCameraSystem.

Before the questions, I’d like to say our team has been thinking about the same problems that GameplayCameraSystem is trying to solve, and we’re very excited to adopt it. Thank you for building and releasing this system.

1. CameraShake / CameraAnimationSequence and CameraNode design

Is there any plan to support running UCameraAnimationSequence-based CameraShakes as a CameraNode?

As I understand it, CameraAnimationSequence assets are non-persistent, while GameplayCameraRigs are designed around persistent behavior. I’d like to confirm if this design philosophy is the main reason it’s structurally difficult to provide a dedicated CameraNode that plays CameraAnimationSequences directly.

If such a CameraNode is not planned or is difficult to provide, I’m also wondering:

does playing CameraAnimationSequences through external systems like UCameraAnimationSequencePlayer, outside the GameplayCamera framework, conflict with the design philosophy of GameplayCamera and risk causing problems later?

2. “Final” intended camera setup for LevelSequencer

When using cameras in LevelSequencer, it seems there are two co-existing approaches:

  1. Use CineCameraActor directly, as before
  2. Place a GameplayCameraRigActor and control the camera through it

If we want to fully leverage GameplayCamera, using GameplayCameraRigActor feels like the “correct” way. However, in production we hit some issues:

  • Values like Filmback, Lens parameters, FieldOfView, etc., which we previously adjusted directly on CineCameraActor, often need to be exposed and handled as parameters.
  • Some features are harder to use, or feel less intuitive for designers and artists to tweak directly.

Given these trade-offs, I’d like to know:

what is the intended “final” setup for using cameras with LevelSequencer?

3. Blend behavior when using LevelSequencer + GameplayCameraRigActor

If the intended final direction is “use GameplayCameraRigActor from LevelSequencer”, I see two main ways to handle camera transitions:

  1. Use CameraCutTrack’s CanBlend option
  2. Use CameraRig Enter/Exit Transition features

From a design perspective, Enter/Exit Transitions seem like the more correct approach, but we run into these production issues:

  1. If each LevelSequence needs different blend settings, we may need separate CameraRigs per sequence.
  2. It’s harder to see and edit blend behavior visually on the timeline, compared to CameraCutTrack’s blend settings.

What workflow does Epic intend here, and are there any planned features or improvements to ease these problems?

(For example, I’m wondering if a CameraCutTrack specialized for GameplayCameraRigActor is being considered.)

4. Accessing DataParameters (e.g., Actor) from CameraNode

Currently, only BlendableParameters are exposed to CameraNodes via the structs in CameraParameters.h. It looks like CameraNodes cannot read DataParameters, such as Actor, from inside the node.

For example, even if we define an Actor parameter, there seems to be no way to expose it as a pin on the CameraNode or access it directly from CameraContextDataTable (and related types/functions don’t appear to have UE_API).

Do you have plans to:

  • Allow CameraNodes to read DataParameters (like Actor) directly, or
  • Expose them as pins on CameraNodes via additional UE_API or structural changes?

Also, regarding PostProcess: right now it seems each CameraNode can only define one PostProcess setting. Is this kind of “static” design intentional long-term, or is there a plan to make this more flexible in the future?

5. Subclassing UGameplayCameraComponent

Currently UGameplayCameraComponent does not have UE_API, so we cannot inherit from it. Is this intentional, or is there a plan to add UE_API and allow subclassing?

Thank you very much for taking the time to read these questions.

[Attachment Removed]

Hello! Thanks for checking out the GPC plugin.

  1. Yes, there are plans to run Sequencer-based shakes as part of the Camera Shake Asset tools, but no ETA on that
    1. Technically yes, the Camera Rig Assets that run in the global/visual layers are meant to be “persistent” (at least until you remove them), but that’s not the technical limitation. There are already a couple of persistent rigs that can house “short-lived” rigs that emulate the “camera modifiers” and “camera shakes” concepts… in the latest version of the GPC you’ll actually find APIs for this (StartGlobalCameraModifierRig, StartVisualCameraModifierRig, StartCameraShakeAsset, etc.)
    2. There is no real technical limitation here, though. You could run a short-lived camera shake or camera modifier as a global layer camera rig. In fact, camera modifiers and camera shakes are basically camera rigs (although the new Camera Shake Asset is sort of a more specialized version of a camera rig). The problem is just that you’d have to write the code yourself to check on that camera rig every frame and see if it’s “finished” so you can remove it, or keep track of it via its instance ID so you can remove it later. That’s why those APIs I just mentioned exist: they do this for you, so you can “fire and forget”, effectively (there’s a rough sketch of this after this list).
    3. In summary: everything is camera rigs and camera nodes. The extra bits are just helper APIs and container camera rigs to make it easier to work with.
    4. The lack of a Sequencer-based camera shake node is simply because I haven’t gotten around to it yet, and all of my current customers have put a lot of other features much higher on their lists, so I’m tackling things in order.
    5. You could take inspiration from the existing Sequencer-based camera shake pattern (for the old/legacy shake system) in order to create an equivalent one for the new GPC shake assets… I don’t foresee any problems except the fact that Sequencer requires a fair amount of code to get set up correctly.
  2. Correct, for cinematics you can use standard camera actors, or “procedural cameras” using the GPC Rig Actor.
    1. Note however that using GPC inside Sequencer is completely experimental. I’ve implemented it but it has only been tested with simple demo situations. Once again, none of my customers have gotten there yet, and 95% of our focus so far has been on gameplay cameras. I’m expecting procedural cinematic cameras to become the focus of my development in a few months, though, as my customers start tackling NPC conversations and scripted events and so on.
    2. Correct, the GPC Rig Actor would have to use a rig that exposes the parameters you want to animate in Sequencer. As mentioned in the previous point, I haven’t iterated much on this, hence the rough state of the workflows. I’d like to discuss the topic a bit more though, so feel free to reach out and start an email thread with me at ludovic.chabant[Content removed] and we can dig into it.
  3. For blending in and out of cinematics, it’s a bit tricky.
    1. If you are NOT using a GPC Player Camera Manager (PCM), then you can’t ever use the GPC transition system. The PCM just blends between view targets using its limited built-in list of blend curves, and GPC can’t do anything about it. Each GPC component is a standalone camera system, and blending from one to another is again limited to view target blending. GPC transitions are only used for blending between rigs inside the same camera system. And the Camera Cut Track is just a glorified call to SetViewTarget, which is the standard engine API for this… the Camera Cut Track doesn’t know about GPC. So basically, in this case, you can only use the Camera Cut Track’s easing.
    2. If you ARE using the GPC PCM and you have 5.7 at least, then maybe we could do something, but it’s not clear what. The GPC PCM runs its own camera system, and if you activate GPC components with the dedicated APIs (ActivateGameplayCamera/DeactivateGameplayCamera) then you basically gain the ability to use GPC transitions between GPC components and actors… If however you let these GPC components and actors activate on their own and run their own camera system (bRunStandaloneCameraSystem), then you go back to each of them being treated as a black box, and being blended as a whole, using whatever blend is given to SetViewTarget (there’s a short sketch of this view-target path after this list). It’s… a bit complicated, but I’ve gone over this stuff a few times and I don’t know if it can be implemented any better than what I ended up with in 5.7 (although I’d love to get suggestions for a better way!). So what does that mean for Sequencer? Well, I probably wouldn’t do a custom GPC-enabled Camera Cut Track, but I could do some sort of PCM-registration system with Sequencer to make the Camera Cut Track able to call other APIs besides SetViewTarget (such as the GPC PCM’s ActivateGameplayCamera). I haven’t had time, or the customer requests, to do this yet though. And even then it’s not clear if it would be worth the trouble, since, like you said, we would replace a simple workflow where you can see the timing and curve of your blend in/out with some convoluted dynamic system that picks some transition you don’t know about until you run the game. So no plans on any of that for now -- the Camera Cut Track ease in/out are good enough for the foreseeable future IMO.
  4. I have recently improved support for data parameters to be passed into nodes. So you should now be able to pass a number of types into a camera node, from actor pointers to strings and enums and so on.
    1. In theory the way to do this is to add “CameraContextData=true” metadata to the UProperty, and then add another UProperty of type FCameraContextDataID that is named the same as the first property but with “DataID” as a suffix. See for instance the “Attachment” and “AttachmentDataID” properties on the AttachToActorCameraNode (there’s a sketch of the same pattern after this list)… if it doesn’t work with actor pointers then it’s a bug and you can send me some more details.
    2. What do you mean that each camera node can only define one post-process setting? In the code, yeah, there’s only one FPostProcessSettings struct that is passed through the nodes as part of the OutResult struct, but each node is free to mess around as much as it wants, setting dozens of values in there (see the post-process sketch after this list). Then when the overall output of that camera rig is blended with another camera rig, those set values are blended with whatever values are set by the other rig. Is there a case where this doesn’t let you do what you want?
  5. We could add UE_API to the GPC component, yeah (in fact I thought it already had it). In general we tend to not put API macros everywhere because (1) not everything needs to or should be subclassed, and (2) we are actually running up against compiler limitations on symbol tables because of the ridiculous amount of code we have in the engine…
    1. What sort of use-case do you need sub-classing for?
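
To make the “fire and forget” point (1.2) a bit more concrete, here’s a rough sketch. I’m writing it from memory, so treat the component it’s called on and the exact signature of StartCameraShakeAsset as approximate and check the 5.7 headers for the real declarations; FindCameraSystem and HitReactShake are just placeholders.

```cpp
// Rough sketch only: the owning component and the exact signature of
// StartCameraShakeAsset are approximate; check the GameplayCameras headers.
void UMyCombatComponent::OnHitReceived()
{
	// FindCameraSystem() and HitReactShake are placeholders for however you
	// reach the running camera system and whichever shake asset you want to play.
	if (UGameplayCameraSystemComponent* CameraSystem = FindCameraSystem())
	{
		// Fire and forget: the helper runs the shake as a short-lived rig and
		// removes it when it finishes, so there is no instance ID to track and
		// no per-frame polling to see if it is done.
		CameraSystem->StartCameraShakeAsset(HitReactShake);
	}
}
```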
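
And to illustrate the view-target point from 3: without the GPC PCM, a camera cut is just the standard view-target blend below, which is all the Camera Cut Track easing maps to. The player controller subclass is a placeholder, and the commented-out activation call is only an approximation of the GPC PCM API, not its real signature.

```cpp
// What "blended as a whole" means in practice: without the GPC Player Camera
// Manager, cutting to another camera is just a standard view-target blend.
void AMyPlayerController::CutToCinematicCamera(AActor* CinematicCamera)
{
	// The whole GPC component (or CineCameraActor) is treated as a black box;
	// only the blend time and curve given here apply, never a GPC transition.
	SetViewTargetWithBlend(CinematicCamera, /*BlendTime*/ 1.0f, VTBlend_EaseInOut, /*BlendExp*/ 2.0f);

	// With the GPC PCM (5.7+), the GPC-aware alternative is to go through its
	// activation APIs instead, roughly along the lines of:
	//   GPCCameraManager->ActivateGameplayCamera(...);
	// which is what allows Enter/Exit transitions to run between rigs.
}
```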
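
Here’s roughly what the data-parameter pattern from 4.1 looks like on a custom node. The node class, the TObjectPtr<AActor> property type, and the include paths are made up for the example; the attach-to-actor node in the plugin is the real reference for the Attachment/AttachmentDataID pairing.

```cpp
// Hypothetical custom node showing the CameraContextData pattern. Class name,
// property type, and include paths are illustrative only.
#pragma once

#include "Core/CameraNode.h"
#include "MyFollowTargetCameraNode.generated.h"

class AActor;

UCLASS()
class UMyFollowTargetCameraNode : public UCameraNode
{
	GENERATED_BODY()

public:
	// The data parameter itself; the metadata is what lets the camera rig
	// expose it and route it through the context data table.
	UPROPERTY(EditAnywhere, Category = "Target", meta = (CameraContextData = true))
	TObjectPtr<AActor> Target;

	// Companion ID property: same name plus the "DataID" suffix, which is how
	// the system pairs it with the property above.
	UPROPERTY()
	FCameraContextDataID TargetDataID;
};
```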
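
And for 4.2, this is what I mean by each node being free to set as many values as it wants on the single shared post-process struct. I’m writing the evaluator signature and the result member name from memory here, so double-check them against FCameraNodeEvaluator in the plugin source; the grading node itself is just an example.

```cpp
// Sketch of a node evaluator writing several post-process overrides into the
// single FPostProcessSettings carried by the evaluation result. Signature and
// member names are from memory; verify against the plugin source.
void FMyGradeCameraNodeEvaluator::OnRun(
	const FCameraNodeEvaluationParams& Params,
	FCameraNodeEvaluationResult& OutResult)
{
	FPostProcessSettings& PP = OutResult.PostProcessSettings;

	// Set whichever overrides this node cares about; untouched fields stay
	// free for other nodes, and set values blend against the other rig later.
	PP.bOverride_ColorSaturation = true;
	PP.ColorSaturation = FVector4(0.8f, 0.8f, 0.8f, 1.0f);

	PP.bOverride_DepthOfFieldFocalDistance = true;
	PP.DepthOfFieldFocalDistance = 350.0f;
}
```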

Cheers!

Ludo

[Attachment Removed]