I’m trying to simulate a shader-based imaging radar in a simulator for drones and cars (AirSim). In AirSim, the onboard perception sensors are mainly a camera actor with a few scene capture components, each applying a post-process material to produce the final output (segmentation, depth, …). We would like to follow the same flow to build the radar.
The approach I got from a radar-simulation expert, to be done in the fragment stage, is as follows:
- get the segmentation map, depth, and normals from the built-in camera components
- build a mesh for each separate object in the segmentation map
- compute the radar response for each mesh using the depth and normal maps
- store the radar response value (brightness) in a 3D volume at its range–angle position, i.e. at (azimuth, elevation, depth)
- apply an orthographic projection to get the final 2D radar image
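To check my understanding of the steps above, here is a rough CPU-side NumPy sketch of the pipeline, using per-pixel buffers instead of per-object meshes. The Lambertian-style `cos/r²` response model, the pinhole-camera angle mapping, and all function and parameter names are my own guesses for illustration, not AirSim's or the expert's actual method:

```python
import numpy as np

def radar_image_from_gbuffers(seg, depth, normals, n_az=64, n_el=64, n_rng=128,
                              max_range=100.0, fov=np.deg2rad(90.0)):
    """CPU prototype: per-pixel radar response from depth/normals, binned
    into an (azimuth, elevation, range) volume, then collapsed along
    elevation into a 2D range-azimuth image. Response model is a placeholder."""
    h, w = depth.shape
    # Per-pixel view-direction angles from an assumed pinhole camera
    az = (np.arange(w) / (w - 1) - 0.5) * fov          # azimuth per column
    el = (np.arange(h) / (h - 1) - 0.5) * fov          # elevation per row
    az_grid, el_grid = np.meshgrid(az, el)

    # Toy backscatter: |normal . view_dir| / r^2 (Lambertian-like guess)
    view = np.stack([np.cos(el_grid) * np.sin(az_grid),
                     np.sin(el_grid),
                     np.cos(el_grid) * np.cos(az_grid)], axis=-1)
    cos_inc = np.abs(np.sum(normals * view, axis=-1))
    response = cos_inc / np.maximum(depth, 1e-3) ** 2
    response[seg == 0] = 0.0                            # ignore background

    # Scatter responses into the (azimuth, elevation, range) volume
    volume = np.zeros((n_az, n_el, n_rng))
    ai = np.clip(((az_grid / fov + 0.5) * n_az).astype(int), 0, n_az - 1)
    ei = np.clip(((el_grid / fov + 0.5) * n_el).astype(int), 0, n_el - 1)
    ri = np.clip((depth / max_range * n_rng).astype(int), 0, n_rng - 1)
    np.add.at(volume, (ai.ravel(), ei.ravel(), ri.ravel()), response.ravel())

    # Orthographic collapse along elevation -> 2D range-azimuth image
    return volume.sum(axis=1)
```

On the GPU side I imagine the scatter step is the hard part, since a post-process material can only gather, not scatter, so maybe it needs a compute shader or a vertex-scatter pass instead. Corrections very welcome.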
I’m very new to Unreal Engine and I really have no idea how to implement these steps in a post-process material.
It would be super appreciated if anyone could help me! Thank you very much in advance.