(Not sure whether this is the right subforum for this…)
I’m trying to drive an effect based on the position of the camera (HMD), but I’m getting a strange lag on the effect that I wasn’t expecting and can’t explain.
The basic setup is two scene objects:
- One point with a static position in the scene (Near_Point)
- One movable point, whose position is derived from the relationship between the camera and the first point (Far_Point)
For visualisation purposes, a static mesh sphere has been attached to each point. Near_Point is red, Far_Point is blue.
Essentially, Far_Point should always appear directly behind Near_Point (at a fixed distance) no matter where the camera is. In other words, the three positions (camera, Near_Point, Far_Point) should always lie on a straight line.
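In vector terms, the intended relationship is (where D is the fixed offset distance, a name I’m using just for this post):

Far_Point = Near_Point + D * normalize(Near_Point - Camera)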
The BP starts by getting the camera position:
Then on tick, the vector from the camera to Near_Point is normalised, multiplied by the fixed distance, and added to the position of Near_Point to set the new position of Far_Point.
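For clarity, here’s roughly what that tick logic amounts to if written as C++ (the actual project is pure Blueprint; the class, the NearPointActor/FarPointActor members, and the Distance value below are placeholders I made up for this post):

```cpp
// Rough C++ sketch of the Blueprint tick logic described above.
// NearPointActor, FarPointActor and Distance are placeholder names.
#include "GameFramework/Actor.h"
#include "Kismet/GameplayStatics.h"
#include "Camera/PlayerCameraManager.h"

void AFarPointDriver::Tick(float DeltaSeconds)
{
    Super::Tick(DeltaSeconds);

    // Current camera (HMD) location for player 0.
    const FVector CameraLocation =
        UGameplayStatics::GetPlayerCameraManager(this, 0)->GetCameraLocation();

    const FVector NearLocation = NearPointActor->GetActorLocation();

    // Unit vector pointing from the camera through Near_Point.
    const FVector Direction = (NearLocation - CameraLocation).GetSafeNormal();

    // Place Far_Point a fixed distance behind Near_Point along that line.
    FarPointActor->SetActorLocation(NearLocation + Direction * Distance);
}
```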
This works, BUT there is a noticeable lag in the updating of Far_Point’s position. If I move my head quickly, I can see Far_Point struggling to catch up.
Here is a video illustrating the point:
Does anyone know what might be causing this lag? I would have thought Far_Point would always be precisely in line with the other two points, but it looks almost as if some kind of lerp is going on.
Very confused, any help/ideas much appreciated!