I’m loving the way the new camera lag max distance works; you can slow the lag speed WAY down to get a nice smooth lerp, and use tracking to keep the player on-screen. But one thing I’m really finding myself wishing for is a way to specify the max distance (camera-orientation-specific, obviously) along each individual axis. For example, to allow the player a lot of lateral movement freedom before the camera has to catch up, but to keep the camera tracking closer when the player runs toward/away from the camera, and to lock it to him almost exactly along the vertical axis (e.g. when jumping or falling).
I think in many cases this would be desired behavior, at least WRT the vertical axis, since games with loose camera tracking often still track vertical position quite exactly.
Yeah, it’s a new 4.7 feature where you can limit the Spring Arm distance. I’d like the ability to control it by axis as well, both the speed of the lag and the distance.
That’s actually really quite clever, and I might just wind up using it! It just means even more stuff in my MyCharacter Tick path, lol.
But rebuilding the camera lag functionality from scratch in BPs… that’s a good idea. I actually already did the exact same thing with the “orient rotation to movement” functionality, since I needed to modify it in ways not exposed to the end user via BPs; I’m kind of baffled that it never occurred to me to reconstruct the camera lag in the same way. And you’re right, to create a max distance you would simply have to make some clever use of min/max nodes to cap the world location of the camera.
Though I wonder if the whole thing couldn’t be done more effectively with Relative Location nodes…
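In C++ terms, the cap I’m imagining boils down to roughly this (FollowCamera and MaxDistance are just stand-in names, and this ignores camera rotation entirely):

```cpp
// Rough sketch of the min/max cap idea: clamp the camera's world location so it
// never drifts more than MaxDistance from the character on each world axis.
// (Blueprint equivalent: Get World Location -> Break Vector -> Clamp -> Make
// Vector -> Set World Location.)
const FVector Target = GetActorLocation();
FVector CamLoc = FollowCamera->GetComponentLocation();
CamLoc.X = FMath::Clamp(CamLoc.X, Target.X - MaxDistance.X, Target.X + MaxDistance.X);
CamLoc.Y = FMath::Clamp(CamLoc.Y, Target.Y - MaxDistance.Y, Target.Y + MaxDistance.Y);
CamLoc.Z = FMath::Clamp(CamLoc.Z, Target.Z - MaxDistance.Z, Target.Z + MaxDistance.Z);
FollowCamera->SetWorldLocation(CamLoc);
```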
Anyway, the only real issue with this (and the real reason to want an OFFICIAL solution) is that modifying the camera is not the same as adjusting the endpoint of the camera boom. This might seem trivial, but it means the way the camera pulls into the player during collision will be adversely affected: if the tracking were applied to the boom endpoint itself rather than the camera, camera collision would still pull tight to the player. In fact, I know from experience that moving the camera itself “off” the boom like this can cause it to clip through walls and floors, since it doesn’t have the boom’s collision to prevent it.
Honestly what might work best is trying to find a way, in BP, to manually adjust the position of the boom endpoint, rather than the camera.
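For reference, the spring arm does expose a hook for that: USpringArmComponent::SocketOffset is applied at the end of the arm and is included in the arm’s collision test, unlike a relative offset on the camera component itself. As a one-line sketch (CameraBoom and the offset values are placeholders):

```cpp
// Nudge the spring arm's endpoint instead of the camera; the arm's collision
// trace still accounts for this offset, so the camera keeps pulling in on hits.
CameraBoom->SocketOffset = FVector(0.f, DesiredLateralOffset, DesiredVerticalOffset);
```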
In my situation I’m trying to make controls similar to how the Sparrow vehicle works in Destiny: the camera lags behind the vehicle’s movement, but the forward/backward movement doesn’t lag as much as the side-to-side movement, so I had to find a way of controlling them separately. In this setup the vehicle uses physics to control movement, the camera is on a spring arm attached to the root, and the lag Blueprint is applied to the root so it lags behind the vehicle.
I had tested a bunch of different approaches, including Relative Transform, but couldn’t get them to work. I think the relative position of the Root is effectively in world coordinates anyway.
Anyways, the result is super smooth, very fun. Though I discovered it runs much more smoothly without the Blueprint editor open, for some reason; I had to adjust my interpolation speeds to account for that.
I’m currently working on a solution using inverse transforms to convert the lerping into SpringArm local space, which is nice because it means you can use Set Socket Offset to move the SpringArm endpoint around rather than the camera (so the camera can’t clip through world geometry, and collision functions as intended).
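Roughly, the shape of that conversion step in C++ terms (the Blueprint node is Inverse Transform Location; DesiredWorldPos stands in for whatever the lag math produces, and the TargetArmLength shift is just my assumption about where the default endpoint sits in arm-local space):

```cpp
// Convert a world-space desired camera position into the spring arm's local
// space so it can be applied through Set Socket Offset instead of moving the
// camera directly.
const FTransform ArmTransform = CameraBoom->GetComponentTransform();
const FVector LocalPos = ArmTransform.InverseTransformPosition(DesiredWorldPos);
// The unlagged endpoint sits TargetArmLength behind the arm's origin along -X,
// so shift by that much to express the offset relative to the endpoint.
CameraBoom->SocketOffset = LocalPos + FVector(CameraBoom->TargetArmLength, 0.f, 0.f);
```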
The disadvantage is that this method can’t reliably read back the current value for the FInterpTo nodes, so it requires a persistent variable in the Actor BP, rather than a local one inside the function, to handle the position lerping. Essentially, it performs the lerp from actor position to actor position in world space, then inverse-transforms that by the actor’s transform to create a local-space Socket Offset value consistent with the world-space calculation. But oh well.
By design it also can’t truly be DEACTIVATED, but you can approximate deactivation with very high interp speeds (or very low max distance values), which lets you smoothly toggle it on and off.
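In practice that can just be a conditional on the input vectors, something like (placeholder names and values):

```cpp
// Very high interp speeds make the camera catch up almost instantly every frame,
// which reads as "lag off" without actually disabling the system.
LagSpeed = bLagEnabled ? FVector(2.f, 1.f, 8.f) : FVector(1000.f);
```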
Will post it here when I’m done.
UPDATE: It’s mostly working, but the problem with your approach is that the camera behaves weirdly under rotating conditions. The expected behavior would be for the camera’s offset to remain relatively constant under rotation; i.e. if the player is standing at the far left of the screen and the camera is slowly closing the gap, rotating the camera should maintain that relative distance between the camera center and the player. Instead, the camera lerps weirdly around because it doesn’t quite understand how to track the player while it rotates.
UPDATE 2: Is it just me, or does your method produce an inverted response at 45-degree angles? With a slow interp speed on Y and a fast interp speed on X, I’m getting correct behavior when the camera faces NSWE, but at 45-degree angles it seems to have a slow interp speed on X and a fast one on Y. Not sure why, but it’s clearly what’s causing my issue…
This system is meant to be pulsed every tick. It requires you to create a single Vector variable (Last Desired Camera Pos) which is persistent across the BP (i.e. non-local to the function).

It also requires that you DO NOT adjust the position of your follow camera component, as this breaks everything (I’m using the follow camera’s position to get the world-space location of the end of the boom, because doing transform inverses for the Socket Offset variable was not working properly). The solution, IF you want to do that, is to instead create a separate Vector variable (I’m using Final Gun Cam Offset since I was adjusting the camera offset for aiming and iron sights, but you can use whatever) and set THAT wherever you would normally set the Relative Location of the follow camera in the BP.

Also note the vector add node at the very end (where I add a vector with about 66 Z and nothing else); this vector represents the DEFAULT value of your Socket Offset variable, which you may have used (I did) to adjust the default camera height or position on the player.
The system takes two vector inputs: Lag Speed and Max Distance. Both specify control-rotation-relative directions based on YAW in my example (so X is always forward, Y is always side-to-side, and Z is always vertical, regardless of rotation). The Lag Speed vector adjusts the individual FInterpTo node interp speeds, i.e. lower values mean slower lerps. Max Distance uses absolute values to specify the range, in Unreal Units from the center of the screen, that the player is allowed to travel before the camera “locks” and refuses to allow him to move further away; e.g. a Y value of 60 prevents the player from moving further than 60 units to either side of center screen, and if he does the camera will update to keep him at -60 or 60.
Note that the Max Distance cap is applied as a hard clamp at the end of the lerping, meaning adjusting that value dynamically at run time will snap the camera around. It is meant to be used with SLOW lerp settings, i.e. you set a lerp value very low for a drifting camera effect, and then use the cap to keep the player from wandering off screen entirely. If you want to pull the camera tight to the player (in my case, I use this when aiming the gun, to stop the player’s shoulder from obstructing the reticle), I recommend instead drastically increasing the interp speed, using either a Select Vector node, a VInterpTo node, or a Lerp (Vector) node. This will cause a more distant camera to “yank” in to the player, which IMO looks better.
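For anyone who would rather read it as code than as a node graph, the pass boils down to roughly the following C++; this is an approximation of the graph rather than the graph itself, with LagSpeed, MaxDistance and LastDesiredCameraPos standing in for the Blueprint variables:

```cpp
// Hedged C++ approximation of the per-axis lag pass described above; pulsed from Tick.
// LagSpeed, MaxDistance and LastDesiredCameraPos are imagined as FVector members.
void AMyCharacter::UpdateCameraLag(float DeltaTime)
{
    const FVector Target = GetActorLocation();

    // Work in a yaw-only control-rotation frame so X is always forward,
    // Y always side-to-side and Z always vertical, regardless of camera yaw.
    const FRotator YawOnly(0.f, GetControlRotation().Yaw, 0.f);
    const FVector LocalTarget  = YawOnly.UnrotateVector(Target);
    const FVector LocalCurrent = YawOnly.UnrotateVector(LastDesiredCameraPos);

    // Per-axis FInterpTo: lower LagSpeed values mean slower lerps.
    FVector LocalNew;
    LocalNew.X = FMath::FInterpTo(LocalCurrent.X, LocalTarget.X, DeltaTime, LagSpeed.X);
    LocalNew.Y = FMath::FInterpTo(LocalCurrent.Y, LocalTarget.Y, DeltaTime, LagSpeed.Y);
    LocalNew.Z = FMath::FInterpTo(LocalCurrent.Z, LocalTarget.Z, DeltaTime, LagSpeed.Z);

    // Hard cap applied after the lerp: never let the lagged position drift
    // further than MaxDistance from the target on any axis.
    LocalNew.X = FMath::Clamp(LocalNew.X, LocalTarget.X - MaxDistance.X, LocalTarget.X + MaxDistance.X);
    LocalNew.Y = FMath::Clamp(LocalNew.Y, LocalTarget.Y - MaxDistance.Y, LocalTarget.Y + MaxDistance.Y);
    LocalNew.Z = FMath::Clamp(LocalNew.Z, LocalTarget.Z - MaxDistance.Z, LocalTarget.Z + MaxDistance.Z);

    // Rotate back with the original, unmodified rotation and remember the result
    // for next tick. Applying it (camera offset variable plus the default Socket
    // Offset) is the final step of the graph and is omitted here.
    LastDesiredCameraPos = YawOnly.RotateVector(LocalNew);
}
```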
Obviously, this works best if you disable the native Camera Lag functionality. As far as I can tell (perhaps UE staff can correct me?), the units I’m using are the same ones Epic uses, so you should be able to get identical behavior to the current 4.7 camera lag by setting all three components of the Lag Speed vector to the Lag Speed float and all three components of Max Distance to the Max Distance float from the component defaults, and then modify from there.
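Assuming the units really do line up, seeding from the stock spring arm settings would look roughly like this (sketch only; CameraBoom is the usual spring arm name from the templates):

```cpp
// Seed the per-axis vectors from the component's single-float lag settings,
// then turn the native lag off so the two systems don't fight each other.
LagSpeed    = FVector(CameraBoom->CameraLagSpeed);
MaxDistance = FVector(CameraBoom->CameraLagMaxDistance);
CameraBoom->bEnableCameraLag = false;
```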
It did that for me too when I used the camera rotation by itself; I had to multiply the rotation by -1 first and rotate the vectors with it, then apply the interpolation, and then rotate the result back with the original unmodified rotation.
In my case, the vehicle rotation is controlled by the camera and lags behind it a bit, and the camera is attached to the root, which in turn lags its position behind the vehicle.
Multiplying the rotator by -1 is the same as unrotating; you unrotate, do your math, and then rotate. Just FYI.
What I ended up doing was a Transform Location based on the camera position, and an Inverse Transform Location at the end. This converts the actor’s location to a location RELATIVE to the actual camera itself. It wound up working very well; if you wander off to the left edge of the screen, rotating the camera maintains that distance toward the edge of the screen, no matter how rapid the rotation.
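Roughly, the idea in C++ terms (same placeholder names as before; the exact node order in the graph may differ, and the per-axis interpolation and clamp are the same as in the earlier sketch):

```cpp
// Do the lag in camera-relative space: world -> camera-local, interpolate per
// axis, then back to world. Rotating the camera now rotates the lag frame with
// it, so the player's on-screen offset stays consistent during rotation.
const FTransform CamXform = FollowCamera->GetComponentTransform();
const FVector LocalTarget  = CamXform.InverseTransformPosition(GetActorLocation());
const FVector LocalCurrent = CamXform.InverseTransformPosition(LastDesiredCameraPos);

FVector LocalNew;
LocalNew.X = FMath::FInterpTo(LocalCurrent.X, LocalTarget.X, DeltaTime, LagSpeed.X);
LocalNew.Y = FMath::FInterpTo(LocalCurrent.Y, LocalTarget.Y, DeltaTime, LagSpeed.Y);
LocalNew.Z = FMath::FInterpTo(LocalCurrent.Z, LocalTarget.Z, DeltaTime, LagSpeed.Z);
// ...apply the Max Distance clamp here exactly as in the earlier sketch...

LastDesiredCameraPos = CamXform.TransformPosition(LocalNew);
```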
Hi, I found this thread after countless hours of hair-pulling and struggle. I’m about to try the solution mentioned above, but in the screenshot there’s a comment at a Branch node where you mention using a different system for 2D. Would you mind sharing the 2D method with me, or at least saying whether the function above will work on a 2D template as well?
Your reply would be greatly appreciated, and thank you in advance.
Axis-specific camera lag would indeed be extremely useful!
Of course… Epic only plans for 3D shooters, where you don’t normally need it. : <
Edit: I’ve made a simple hack… I’m changing the camera lag speed when falling. The same works for Max Distance. If you change these values based on game events, you can get some fancy camera control. It’s not ideal… cuz it will accelerate the camera lag on the horizontal axes as well. But it can be tweaked to look sort of decent… : ]
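Roughly, in C++ it would be something like this (a minimal sketch, assuming a Character with a spring arm called CameraBoom; the speed values are made up):

```cpp
// Tighten the native camera lag while airborne so jumps and falls track closely,
// then relax it again on the ground. The same trick works for CameraLagMaxDistance.
const bool bFalling = GetCharacterMovement()->IsFalling();
CameraBoom->CameraLagSpeed = bFalling ? 30.f : 10.f;
```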