I’m implementing a tool in UE4 that needs to perform a lot of collision detection, so it has to be high performance. I have tried the asynchronous collision detection interface the engine provides, such as UWorld::AsyncLineTraceByChannel, but the performance still falls short of expectations, so I’m hoping to move to GPU-based collision detection.
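For reference, this is roughly how I am issuing the traces at the moment (a minimal sketch; AMyTraceTool and its members are placeholders for my actual tool class, whose declaration I’ve omitted):

```cpp
#include "Engine/World.h"
#include "WorldCollision.h"

void AMyTraceTool::IssueTrace(const FVector& Start, const FVector& End)
{
	// Bind a delegate that fires on the game thread once the trace result is ready,
	// typically one or more frames after the request is queued.
	FTraceDelegate TraceDelegate;
	TraceDelegate.BindUObject(this, &AMyTraceTool::OnTraceDone);

	FCollisionQueryParams Params;
	Params.bTraceComplex = false;

	GetWorld()->AsyncLineTraceByChannel(
		EAsyncTraceType::Single,   // only need the first blocking hit
		Start, End,
		ECC_Visibility,
		Params,
		FCollisionResponseParams::DefaultResponseParam,
		&TraceDelegate);           // the engine copies the delegate, so a local is fine
}

void AMyTraceTool::OnTraceDone(const FTraceHandle& Handle, FTraceDatum& Datum)
{
	for (const FHitResult& Hit : Datum.OutHits)
	{
		// Consume the hit here (Hit.ImpactPoint, Hit.GetActor(), ...)
	}
}
```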
As far as I know, Niagara offers GPU-based collision detection, and I tried that route, but I ran into two problems. First, syncing particle data back to Blueprints tends to crash when the particle count is large. Second, Niagara in UE4 only supports collision against the global distance field (SDF) when using GPU simulation, and in my scenario that accuracy is sometimes not enough. I noticed that Niagara in UE5 adds collision detection based on hardware ray tracing, so I would like to understand some of the technical details behind it.
Here are my specific questions; I hope someone can answer them:
Are the engine’s built-in collision detection interfaces, such as UWorld::AsyncLineTraceByChannel, CPU-based? Does the engine provide a comparable GPU-based collision detection interface?
If the engine does not provide a ready-made GPU-based collision detection interface, is it feasible to implement UE5 Niagara’s hardware-ray-traced collision detection in UE4? If I want to understand the key points of this technique, where can I find an introduction?
Collision is CPU only, at least whenever it needs to talk back to Blueprint stuff / actors etc.
All line traces and similar queries exposed to Kismet/Blueprints are based on CPU calculations.
I think you need to approach what you are attempting differently.
For instance, and for starters, you may be able to pick apart the new Chaos cloth collision code to get ideas on how it works.
Secondly, the math for collision/self-collision of tris is probably better explained in a GPU Gems article than it is in code. Surely you can find something specific to your use case that has already been peer reviewed, without having to scratch your head for a month or two…
Third, maybe you don’t even need collisions?
Maybe all you need is to write shader code that reacts to an array of locations and shifts things accordingly. Game design is 99% smoke and mirrors. Sometimes you can make things look way better by faking it. It really depends on what your end goal is (which you haven’t explained much). See the sketch below for the general idea.
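To make that concrete, here is one rough sketch of the “array of locations” idea, assuming you feed a few collider positions into a Material Parameter Collection every tick so a World Position Offset material can deform the mesh around them. All class, asset, and parameter names here (AMyDeformerActor, MPC_Colliders, Collider0…Collider3, TrackedColliders) are placeholders, not anything from your project:

```cpp
#include "Kismet/KismetMaterialLibrary.h"
#include "Materials/MaterialParameterCollection.h"

void AMyDeformerActor::Tick(float DeltaSeconds)
{
	Super::Tick(DeltaSeconds);

	// ColliderCollection is a UMaterialParameterCollection* UPROPERTY pointing at MPC_Colliders.
	// TrackedColliders is a TArray<AActor*> of whatever the mesh should react to.
	const int32 NumSlots = 4; // the collection has vector parameters Collider0..Collider3
	for (int32 i = 0; i < NumSlots; ++i)
	{
		const FVector Pos = TrackedColliders.IsValidIndex(i)
			? TrackedColliders[i]->GetActorLocation()
			: FVector(0.f, 0.f, -100000.f); // park unused slots far away from the mesh

		UKismetMaterialLibrary::SetVectorParameterValue(
			this, ColliderCollection,
			FName(*FString::Printf(TEXT("Collider%d"), i)),
			FLinearColor(Pos));
	}
}
```

The material then reads those vectors and offsets world position based on distance to each point, which looks like a reaction to the objects without any real collision query.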
Fourth - a theory, really.
In theory, yes, ray tracing gives you the current-frame position of every tri, from which you could possibly do some math.
However, stopping a tri from moving in the next frame is a whole different concept altogether. I’m not really sure the current-frame position would even matter, since you would need the next frame’s position to determine whether the tri collides and should therefore not move…
PS:
Async doesn’t mean better or faster. In fact it’s kind of the opposite: it can lag behind and delivers a result whenever it is able to (without stopping the rest of the code from executing).
This would mean far lower accuracy in most if not all cases.
The higher the performance cost of the work, the further behind the async result will trail.
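To illustrate that last point, here is a minimal sketch of polling an async trace handle from Tick (class and member names like AMyTraceTool, PendingHandle, bTracePending are placeholders). UWorld::QueryTraceData keeps returning false until the trace has actually finished, so the result you eventually consume can be a frame or more behind the request:

```cpp
#include "Engine/World.h"
#include "WorldCollision.h"

void AMyTraceTool::Tick(float DeltaSeconds)
{
	Super::Tick(DeltaSeconds);

	if (bTracePending)
	{
		FTraceDatum Datum;
		if (GetWorld()->QueryTraceData(PendingHandle, Datum))
		{
			// Result finally arrived - possibly several frames after the request was issued.
			bTracePending = false;
			for (const FHitResult& Hit : Datum.OutHits)
			{
				// React to the (slightly stale) hit here.
			}
		}
		// else: still in flight, check again next Tick.
	}
}
```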