Hello,
I’m writing with a few questions about the GameplayCameraSystem.
Before getting to the questions, I’d like to say that our team has been wrestling with the same problems GameplayCameraSystem is trying to solve, and we’re very excited to adopt it. Thank you for building and releasing this system.
1. CameraShake / CameraAnimationSequence and CameraNode design
Is there any plan to support running UCameraAnimationSequence-based CameraShakes as a CameraNode?
As I understand it, CameraAnimationSequence assets are non-persistent, while GameplayCameraRigs are designed around persistent behavior. I’d like to confirm whether this design philosophy is the main reason it’s structurally difficult to provide a dedicated CameraNode that plays CameraAnimationSequences directly.
If such a CameraNode is not planned or is difficult to provide, I’m also wondering:
does playing CameraAnimationSequences through external systems such as UCameraAnimationSequencePlayer, outside the GameplayCamera framework, conflict with GameplayCamera’s design philosophy and risk causing problems later?
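For context, the external path I have in mind is something like the sketch below, which plays a UCameraAnimationSequence through the classic camera-shake route instead of a CameraNode. I’m assuming USequenceCameraShakePattern from the GameplayCameras plugin here; the class and function names are our own, and header paths may differ by engine version:

```cpp
// MySequenceShake.h — our example; assumes the GameplayCameras plugin is
// enabled and listed in our module’s Build.cs dependencies.
#include "Camera/CameraShakeBase.h"
#include "Camera/PlayerCameraManager.h"
#include "GameFramework/PlayerController.h"
#include "SequenceCameraShake.h" // USequenceCameraShakePattern (path may vary)
#include "MySequenceShake.generated.h"

UCLASS()
class UMySequenceShake : public UCameraShakeBase
{
	GENERATED_BODY()

public:
	UMySequenceShake(const FObjectInitializer& ObjectInitializer)
		: Super(ObjectInitializer)
	{
		// Root pattern that plays a UCameraAnimationSequence asset.
		USequenceCameraShakePattern* Pattern =
			CreateDefaultSubobject<USequenceCameraShakePattern>(TEXT("SequencePattern"));
		// Pattern->Sequence would be set to one of our UCameraAnimationSequence assets.
		SetRootShakePattern(Pattern);
	}
};

// Triggered from gameplay code, bypassing the GameplayCamera rig stack entirely:
void PlayHitShake(APlayerController* PC)
{
	if (PC && PC->PlayerCameraManager)
	{
		PC->PlayerCameraManager->StartCameraShake(UMySequenceShake::StaticClass());
	}
}
```

This works for us, but it runs entirely outside the rig evaluation, which is exactly why we’re unsure whether it fights the framework.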
2. “Final” intended camera setup for LevelSequencer
When using cameras in LevelSequencer, it seems there are two co-existing approaches:
- Use CineCameraActor directly, as before
- Place a GameplayCameraRigActor and control the camera through it
If we want to fully leverage GameplayCamera, using GameplayCameraRigActor feels like the “correct” way. However, in production we hit some issues:
- Values like Filmback, Lens parameters, FieldOfView, etc., which we previously adjusted directly on CineCameraActor, often need to be exposed and handled as parameters.
- Some features are harder to use, or feel less intuitive for designers and artists to tweak directly.
Given these trade-offs, I’d like to know:
what is the intended “final” setup for using cameras with LevelSequencer?
3. Blend behavior when using LevelSequencer + GameplayCameraRigActor
If the intended final direction is “use GameplayCameraRigActor from LevelSequencer”, I see two main ways to handle camera transitions:
- Use CameraCutTrack’s CanBlend option
- Use CameraRig Enter/Exit Transition features
From a design perspective, Enter/Exit Transitions seem like the more correct approach, but we run into these production issues:
- If each LevelSequence needs different blend settings, we may need separate CameraRigs per sequence.
- It’s harder to see and edit blend behavior visually on the timeline, compared to CameraCutTrack’s blend settings.
What workflow does Epic intend here, and are there any planned features or improvements to ease these problems?
(For example, I’m wondering if a CameraCutTrack specialized for GameplayCameraRigActor is being considered.)
4. Accessing DataParameters (e.g., Actor) from CameraNode
Currently, only BlendableParameters are exposed to CameraNodes via the structs in CameraParameters.h. It looks like CameraNodes cannot read DataParameters, such as Actor, from inside the node.
For example, even if we define an Actor parameter, there seems to be no way to expose it as a pin on the CameraNode or to access it directly from CameraContextDataTable (the related types and functions don’t appear to be exported with UE_API).
Do you have plans to:
- Allow CameraNodes to read DataParameters (like Actor) directly, or
- Expose them as pins on CameraNodes via additional UE_API exports or structural changes?
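To make this concrete, here’s a rough sketch of the kind of node-side declaration we’re hoping for. FFloatCameraParameter is the existing blendable type from CameraParameters.h; FActorCameraParameter is a hypothetical name I made up for illustration, and the header paths may differ by engine version:

```cpp
// Sketch only — the commented-out part is not existing API, it is our wish.
#include "Core/CameraNode.h"        // base UCameraNode (path per engine version)
#include "Core/CameraParameters.h"  // FFloatCameraParameter etc.
#include "MyLookAtCameraNode.generated.h"

UCLASS()
class UMyLookAtCameraNode : public UCameraNode
{
	GENERATED_BODY()

public:
	// Works today: blendable parameters appear as pins on the node.
	UPROPERTY(EditAnywhere, Category = "Look At")
	FFloatCameraParameter Weight;

	// The missing piece: a non-blendable data parameter, e.g. an actor
	// reference resolved from the evaluation context’s data table.
	// FActorCameraParameter is hypothetical; nothing like it is exposed today.
	// UPROPERTY(EditAnywhere, Category = "Look At")
	// FActorCameraParameter TargetActor;
};
```

The commented-out property is the part we cannot express today.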
Also, regarding PostProcess: right now it seems each CameraNode can define only one PostProcess setting. Is this kind of “static” design intentional long-term, or is there a plan to make it more flexible in the future?
5. Subclassing UGameplayCameraComponent
Currently UGameplayCameraComponent is not exported with UE_API, so we cannot inherit from it in our own modules. Is this intentional, or is there a plan to add UE_API and allow subclassing?
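For reference, this is the pattern that fails for us today: it compiles, but linking fails in modular builds because the class isn’t exported (the module and class names here are ours):

```cpp
// MyGameplayCameraComponent.h — fails to link outside the GameplayCameras
// module today: without UE_API / GAMEPLAYCAMERAS_API on the class, its
// constructor and vtable are unresolved external symbols for our module.
#include "GameFramework/GameplayCameraComponent.h" // path per engine version
#include "MyGameplayCameraComponent.generated.h"

UCLASS(ClassGroup = Camera, meta = (BlueprintSpawnableComponent))
class MYGAME_API UMyGameplayCameraComponent : public UGameplayCameraComponent
{
	GENERATED_BODY()

public:
	// Project-specific extensions would live here, e.g. helpers that choose
	// a camera rig based on our own gameplay state.
};
```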
Thank you very much for taking the time to read these questions.