Hello guys, thanks for taking the time to respond. I think there has been a misunderstanding. I'm well aware that full real-world realism is not achievable, but I've seen recent research that uses synthetic scenes built in Unity and Unreal Engine (e.g. AirSim for vision). So the scenes are close enough, even if they're not perfectly real. Also, I don't need one huge scene to scan: many maps of 1 km^2 each, or even smaller, would be perfectly fine. I just need them to be as realistic as possible, and not only rendering-wise but also content-wise. I don't need spaceships, aliens, or fantasy castles; I need neighborhoods, forests, and the other stuff you would expect to see in the real world.
An example of similar work is the research paper [1804.00103] "A LiDAR Point Cloud Generator: from a Virtual World to Autonomous Driving". The researchers scripted a mod for GTA V that mounted a virtual LiDAR scanner on top of a car and sent the collected data to a server. The data was used to train self-driving car technology and was found to be beneficial. I want to do a similar thing - mount an airborne LiDAR scanner on the underside of an airplane in GTA V - but that's another story.
Also, no worries, you are not rude at all. Yes, of course real-world scenes would be best, but here is the problem: when you scan actual real-world environments you get 3D scans, but they are not annotated. You know you hit something, and you can determine its location and more, but you don't know what you actually hit. Sure, some datasets are annotated, but they are few (not enough), and if they were annotated by an algorithm (call it A), then A surely doesn't have perfect accuracy. That means my algorithm (call it B) would be bounded in accuracy by A: if A mislabels 10% of the points, then 10% of B's training labels are wrong.
You can imagine that it's different with scenes in Unreal Engine. Say I implement a LiDAR scanner with ray casting. When you cast a ray and it hits something, you can retrieve the hit object. If the hit object is, say, an instance of a house, I can annotate that point as a building. So the advantage of synthetic scenes is that you know exactly what you are hitting, which lets me generate perfect training data.
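To make the idea concrete, here is a minimal sketch of the annotate-by-raycast technique in plain Python (the scene, object names, and sphere geometry are all hypothetical stand-ins; in UE4 you would instead do a line trace and read the label off the hit actor):

```python
import math
from dataclasses import dataclass

@dataclass
class SceneObject:
    label: str           # semantic class, e.g. "building"
    center: tuple        # (x, y, z) center of a sphere proxy
    radius: float

def cast_ray(origin, direction, objects):
    """Return (hit_point, label) for the nearest object hit, or None.

    direction is assumed to be unit-length, so the quadratic's
    'a' coefficient is 1 in the ray-sphere intersection below.
    """
    best = None
    for obj in objects:
        oc = tuple(o - c for o, c in zip(origin, obj.center))
        b = 2.0 * sum(d * v for d, v in zip(direction, oc))
        c = sum(v * v for v in oc) - obj.radius ** 2
        disc = b * b - 4.0 * c
        if disc < 0:
            continue  # ray misses this object
        t = (-b - math.sqrt(disc)) / 2.0  # nearest intersection distance
        if t > 0 and (best is None or t < best[0]):
            best = (t, obj.label)
    if best is None:
        return None
    t, label = best
    # the annotated point: position along the ray plus the class we hit
    point = tuple(o + t * d for o, d in zip(origin, direction))
    return point, label

# hypothetical scene: a "building" and a "tree"
scene = [
    SceneObject("building", (0.0, 0.0, 10.0), 2.0),
    SceneObject("tree", (5.0, 0.0, 10.0), 1.0),
]
hit = cast_ray((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), scene)
# hit -> ((0.0, 0.0, 8.0), "building")
```

The point is the last two values returned together: every simulated return carries its ground-truth class for free, which is exactly what real scans lack.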
The goal is then to train an algorithm on the synthetic data and use it to predict annotations for real-world data.
Also, I have already built a system, and it works correctly - I use the UE4 point cloud plugin to render the result every couple of frames. It still needs some tweaks, but it will be ready soon. Since I've seen interest in such a tool on these forums, I'll post it here once it's done. I just need actual scenes to scan.