In an effort to reduce LWC/DF usage in our materials, we are trying to move away from the Absolute World Position node in favour of the Camera-Relative World Position node. A consequence of this is that systems that use MPCs to expose positions now have to expose them in camera-relative world space, which is proving difficult.
I am using UGameplayStatics::GetPlayerCameraManager to get the camera's location in my subsystem's Tick method and subtracting it from my position vector before writing the result to the MPC. Tests with our tech artists show that doing this introduces a one-frame lag between the MPC value and the camera location the material actually renders with. Is there a recommended way for C++ code to expose dynamic camera-relative world-space positions to materials?
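Roughly, the relevant part of the subsystem looks like this (simplified sketch; `PositionCollection`, `WorldPosition`, and the `CamRelativePos` parameter name are placeholders for our actual members):

```cpp
// Simplified sketch of the current approach; PositionCollection, WorldPosition,
// and "CamRelativePos" stand in for our actual members/names.
#include "Kismet/GameplayStatics.h"
#include "Kismet/KismetMaterialLibrary.h"
#include "Materials/MaterialParameterCollection.h"

void UMyPositionSubsystem::Tick(float DeltaTime)
{
    APlayerCameraManager* CameraManager = UGameplayStatics::GetPlayerCameraManager(GetWorld(), 0);
    if (!CameraManager || !PositionCollection)
    {
        return;
    }

    // Subtract the game-thread camera location so the value fits comfortably
    // in the float-precision MPC parameter.
    const FVector CameraRelative = WorldPosition - CameraManager->GetCameraLocation();

    UKismetMaterialLibrary::SetVectorParameterValue(
        GetWorld(), PositionCollection, TEXT("CamRelativePos"), FLinearColor(CameraRelative));
}
```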
The one-frame lag is due to the one-frame offset between the game thread and the render thread: the camera location you read during Tick on the game thread is not the one the render thread ends up using for that frame's view.
One approach you can try is to inject the data into your materials through a custom Scene Uniform Buffer, set up from an FSceneViewExtension.
With this approach, you can read from the FSceneView that will actually be used for rendering the frame, instead of being one frame behind, and set up the buffer in the PreRenderView_RenderThread override.
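As a rough sketch of the view-extension side (class and member names here are illustrative, and the custom scene uniform buffer declaration/registration itself is omitted; the tutorial mentioned below covers that part):

```cpp
#include "SceneViewExtension.h"

// Illustrative sketch only: the custom scene uniform buffer declaration and
// registration are omitted here.
class FCameraRelativeViewExtension : public FSceneViewExtensionBase
{
public:
    FCameraRelativeViewExtension(const FAutoRegister& AutoRegister)
        : FSceneViewExtensionBase(AutoRegister)
    {
    }

    //~ ISceneViewExtension interface (unused overrides left empty)
    virtual void SetupViewFamily(FSceneViewFamily& InViewFamily) override {}
    virtual void SetupView(FSceneViewFamily& InViewFamily, FSceneView& InView) override {}
    virtual void BeginRenderViewFamily(FSceneViewFamily& InViewFamily) override {}

    virtual void PreRenderView_RenderThread(FRDGBuilder& GraphBuilder, FSceneView& InView) override
    {
        // InView is the view that will actually be rendered this frame, so the
        // origin below matches what the material sees -- no one-frame lag.
        const FVector ViewOrigin = InView.ViewMatrices.GetViewOrigin();
        const FVector3f CameraRelative = FVector3f(WorldPosition_RenderThread - ViewOrigin);
        // ...write CameraRelative into the custom scene uniform buffer here...
    }

    // Mirrored copy of the game-thread position, updated via ENQUEUE_RENDER_COMMAND.
    FVector WorldPosition_RenderThread = FVector::ZeroVector;
};
```

The extension would typically be created once (for example in your subsystem's Initialize) with FSceneViewExtensions::NewExtension&lt;FCameraRelativeViewExtension&gt;(), which keeps it registered for as long as you hold the returned TSharedRef.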
There is a community tutorial that may be helpful for implementing the approach I suggested. Keep in mind that this material is not from Epic Games.