How can I use ray casts without a large performance hit?

Hello all,
I am currently developing a lidar sensor model using Ray-cast. A 16 channel sensor rotating at 10Hz should produce 300000 points. I can use 300000 Ray-cast operations to get this result but the performance is very weak (frame rate falls to 8 when motion is given to the sensor). Is there any other way I can do this?


  • After scanning a certain number of points (384, to be precise), a UDP packet is sent along with a timestamp. (I can't wait a full frame for the results, as is the case with asynchronous traces.) The trace data is sent to an algorithm that runs on a real-time system.
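To make the timing constraint concrete, here is a rough sketch of the packet cadence. It assumes the 300,000 points are produced per second (the post does not state the unit explicitly); the packet interval then follows from the 384-point batch size:

```python
# Rough timing sketch for the lidar packet cadence.
# Assumption (not stated explicitly in the post): 300,000 points per second.
POINTS_PER_SECOND = 300_000
POINTS_PER_PACKET = 384

packets_per_second = POINTS_PER_SECOND / POINTS_PER_PACKET
packet_interval_ms = 1000.0 / packets_per_second

print(f"{packets_per_second:.1f} packets/s")           # ~781.2 packets/s
print(f"{packet_interval_ms:.2f} ms between packets")  # ~1.28 ms

# Even at 60 FPS a frame lasts ~16.7 ms, so waiting one frame for async
# trace results would delay roughly a dozen packets.
frame_time_ms = 1000.0 / 60
print(f"{frame_time_ms / packet_interval_ms:.1f} packets per 60 FPS frame")  # ~13.0
```

This is why a one-frame wait for asynchronous trace results is too long for the real-time consumer.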

Try async traces:

Well, the benefit of the game engine is that you already have lidar-like data available in advance, in the form of the renderer's depth buffer. You could output it with something like a SceneCapture component, with a resolution matching (or approximating) the point density of the desired lidar sensor. That would be the most efficient way to go about it. I am not sure how much that would be considered cheating in terms of simulating a real-world lidar; it depends on what exactly you want to use the simulation for. :)

Hi BrUno,
Thank you for the response.
I cannot use asynchronous traces. Sorry for the incomplete requirements; I have updated my post.

Hello Rawalanche,
Thank you.
Could you please add a few more details, or point me to some material that can help me do what you described?

Whatever your Blueprint is, add a SceneCaptureComponent to it; that will serve as a kind of lidar camera. In the Capture Source, select either SceneDepth or DeviceDepth, depending on your requirements. Create a new render target asset of the desired resolution and plug it into the SceneCaptureComponent's render target slot. Then parent the SceneCaptureComponent to the rotation pivot that rotates the lidar camera, and set the SceneCapture FOV to match your desired FOV. You will get a render target with depth values. That's as far as I know how to go.
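One way to pick the render target resolution is to match the sensor's angular resolution. This is a back-of-the-envelope sketch under assumptions not stated in the thread: a full 360° azimuth scan at 10 Hz, 300,000 points per second spread evenly over 16 channels, and a 90° horizontal capture FOV (so four captures tile one rotation):

```python
import math

# Sketch: choose a SceneCapture render target width that matches the
# lidar's azimuth resolution. All constants below are assumptions for
# illustration, derived from the numbers in the original question.
POINTS_PER_SECOND = 300_000
ROTATION_HZ = 10
CHANNELS = 16
CAPTURE_FOV_DEG = 90

points_per_rotation = POINTS_PER_SECOND // ROTATION_HZ   # 30,000
azimuth_steps = points_per_rotation // CHANNELS          # 1,875 per rotation
azimuth_res_deg = 360 / azimuth_steps                    # 0.192 degrees

# Horizontal pixels so that one pixel column ~ one azimuth step:
width = math.ceil(CAPTURE_FOV_DEG / azimuth_res_deg)
print(width)  # 469
```

The render target height would be driven by the 16 channels and their vertical angular spread, which depends on the specific sensor being modeled.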

There’s another, more involved part where you will need to reconstruct the actual point locations from the transform of the SceneCaptureComponent and the depth values. Technically, getting the location of a point should be a matter of multiplying the per-pixel view direction (derived from the SceneCaptureComponent's forward vector and its FOV) by the depth of the given render target pixel, and then adding that to the SceneCaptureComponent's world location. Note that the forward vector alone is only correct for the center pixel; off-center pixels need the ray direction for that pixel.
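The reconstruction step above can be sketched in plain Python. This illustrates only the geometry, under assumed conventions not confirmed in the thread: the depth value is plane depth (distance along the camera's forward axis), the FOV is the horizontal one, and pixels are indexed from the top-left with centers at half-pixel offsets. In Unreal you would read the depth from the render target (e.g. via a ReadPixels-style call) and take the basis vectors from the SceneCaptureComponent's transform:

```python
import math

def pixel_to_world(depth, px, py, width, height, fov_deg,
                   cam_pos, cam_forward, cam_right, cam_up):
    """Reconstruct a world-space point from one depth-buffer pixel.

    Assumptions (illustrative, not taken from the thread):
      - `depth` is plane depth: distance along the camera forward axis.
      - `fov_deg` is the horizontal FOV; vertical FOV follows the aspect ratio.
      - cam_forward / cam_right / cam_up are unit basis vectors of the camera.
    """
    # Half-extent of the image plane at unit distance from the camera.
    tan_half_fov = math.tan(math.radians(fov_deg) / 2)
    # Normalized device coordinates in [-1, 1], sampling pixel centers.
    ndc_x = 2 * (px + 0.5) / width - 1
    ndc_y = 1 - 2 * (py + 0.5) / height
    # Lateral offsets on the image plane at distance `depth`.
    off_x = ndc_x * tan_half_fov * depth
    off_y = ndc_y * tan_half_fov * (height / width) * depth
    return tuple(
        cam_pos[i]
        + cam_forward[i] * depth
        + cam_right[i] * off_x
        + cam_up[i] * off_y
        for i in range(3)
    )

# For the center of the image the offsets vanish, so the result reduces to
# cam_pos + cam_forward * depth, which matches the forward-vector intuition:
p = pixel_to_world(100.0, 0, 0, 1, 1, 90.0,
                   (0.0, 0.0, 0.0),   # camera position
                   (1.0, 0.0, 0.0),   # forward (UE is X-forward)
                   (0.0, 1.0, 0.0),   # right
                   (0.0, 0.0, 1.0))   # up
print(p)  # (100.0, 0.0, 0.0)
```

If the capture source returns distance along the ray instead of plane depth, the math changes: you would normalize the per-pixel ray direction and multiply it by that distance directly.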