If you take the time to fully read my sentence you’ll notice I said “it’s not well suited for other applications”. And this is a fact. While compute (of which CUDA/OpenCL is a part) is shifting the perspective, the chip design still retains much of the classic rasterizer layout. Read up on articles by John Carmack about this.
If what you are saying were true, we would be running a raytracer in Unreal Engine, not a rasterizer like it does now. While compute parallelizes well and is very powerful, we’re not yet at the point where we can fully utilize that power without going through a classic pipeline, at least for games.
Also, please try not to distort what people are saying just to prove a point, and if you quote what people say, please do it in full.
At this moment you cannot even run a GI solution at interactive framerates with octrees (look up SVOGI, for instance) because traversal speed is poor, so how well do you expect it to perform against a multi-terabyte asset that also needs to be streamed from disk to the card? The bottleneck is in data transfer most of the time. After that bottleneck you still need to traverse the octree and then do the actual processing. Do you really think all of this is feasible right now?
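To make the traversal cost concrete, here’s a toy sparse voxel octree lookup sketched in Python (all names hypothetical, not from any real engine): every level of descent is a dependent read into the next child node, so a query into a tree of depth d costs d scattered memory accesses before any shading work can even start. That pointer chasing, not raw compute, is what kills interactive framerates.

```python
# Toy sparse voxel octree: illustrates why lookups are traversal-bound.
# Each descent step is a dependent dereference (node.children[i]), so a
# 8192^3 volume (depth 13) needs 13 serialized reads per query.

class OctreeNode:
    __slots__ = ("children", "payload")

    def __init__(self, payload=None):
        self.children = [None] * 8   # sparse: most entries stay None (empty space)
        self.payload = payload       # e.g. averaged color/normal for this cell

def insert(root, x, y, z, size, payload):
    """Create the path of nodes down to the unit voxel at (x, y, z)."""
    node, ox, oy, oz = root, 0, 0, 0
    while size > 1:
        half = size // 2
        ix = 1 if x >= ox + half else 0
        iy = 1 if y >= oy + half else 0
        iz = 1 if z >= oz + half else 0
        i = ix | (iy << 1) | (iz << 2)   # octant index 0..7
        if node.children[i] is None:
            node.children[i] = OctreeNode()
        node = node.children[i]
        ox, oy, oz = ox + ix * half, oy + iy * half, oz + iz * half
        size = half
    node.payload = payload

def lookup(root, x, y, z, size):
    """Descend to the voxel at (x, y, z); returns (node_or_None, steps taken)."""
    node, ox, oy, oz, steps = root, 0, 0, 0, 0
    while node is not None and size > 1:
        half = size // 2
        ix = 1 if x >= ox + half else 0
        iy = 1 if y >= oy + half else 0
        iz = 1 if z >= oz + half else 0
        node = node.children[ix | (iy << 1) | (iz << 2)]  # dependent read per level
        ox, oy, oz = ox + ix * half, oy + iy * half, oz + iz * half
        size = half
        steps += 1
    return node, steps

root = OctreeNode()
insert(root, 5, 3, 7, 8, "voxel")            # one filled voxel in an 8^3 volume
hit, steps = lookup(root, 5, 3, 7, 8)        # 3 dependent reads: 8 -> 4 -> 2 -> 1
miss, _ = lookup(root, 0, 0, 0, 8)           # empty space bails out early with None
```

And that is the in-memory cost alone; in the scenario above, every one of those node fetches may first have to come off disk and across the PCIe bus.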
Also, saying “Most of the point cloud will be highly similar data or empty space” is just a wild assumption. There’s not enough data from shipped games to support that statement. You also have no real control over what users will be creating, and you need to make the solution work for whatever asset your artists are gonna produce, especially in a generalized engine like UE. Look at Minecraft, and see how far the human mind can push a system beyond its assumptions.
If the whole industry is going in one direction instead of another, there’s bound to be a valid reason, don’t you think? Have you really researched the pros and cons of what you are proposing while considering the state of things as it is now? Or are you gonna be progressive just for the sake of it?
Frankly, I’d avoid this much arrogance in comments and do some more research.