
Detecting which collision volume is intersected most

Hi!

I’m stuck at a problem at the moment, and I hope there might be someone who could help me find a solution and tell me if it is at all possible. Being still relatively new to blueprints (and UE4 in general), I honestly have no idea how to approach this.

The short story is that when an object on the map intersects multiple collision volumes, I want to detect which of the collision volumes the object intersects “the most”. I only want the detection to consider the X-axis and Y-axis, not the Z-axis.

I’ve added a picture example at the bottom to help describe the issue. The two squares represent two separate collision volumes, and the blue circle represents a map object that intersects both collision volumes. As you can see, the red volume covers approx. 25% of the object, whilst the green one covers a larger area, about half of the object. In other words, in the picture below, I would want the blueprint to tell me that the green collision volume is the one the object intersects the most.
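For what it’s worth, if the volumes and the object can be approximated as axis-aligned boxes on the XY plane, the “which volume covers the most” question becomes a simple 2D overlap-area comparison. Here’s a minimal sketch of that math (plain Python, not blueprint; the rectangle tuples and function names are mine, just for illustration):

```python
def overlap_area_xy(a, b):
    """2D overlap area of two axis-aligned rectangles.

    Each rectangle is a tuple (min_x, min_y, max_x, max_y);
    the Z-axis is ignored entirely, per the requirement.
    """
    width = min(a[2], b[2]) - max(a[0], b[0])
    height = min(a[3], b[3]) - max(a[1], b[1])
    # Negative width/height means the rectangles don't overlap at all.
    return max(width, 0.0) * max(height, 0.0)


def most_covering_volume(obj_box, volumes):
    """Return the name of the volume whose XY overlap with obj_box is largest,
    or None if nothing overlaps."""
    best_name, best_area = None, 0.0
    for name, box in volumes.items():
        area = overlap_area_xy(obj_box, box)
        if area > best_area:
            best_name, best_area = name, area
    return best_name


# Mirrors the picture: green covers far more of the object than red does.
obj = (0.0, 0.0, 2.0, 2.0)
volumes = {
    "red": (-1.0, -1.0, 0.5, 0.5),    # clips one corner: 0.5 x 0.5 overlap
    "green": (1.0, -1.0, 3.0, 3.0),   # covers half: 1.0 x 2.0 overlap
}
print(most_covering_volume(obj, volumes))  # → green
```

In blueprint terms you’d do the same min/max arithmetic with the volumes’ world-space bounds; the comparison logic doesn’t change.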

[Image: red and green collision volumes overlapping the blue object; red covers about a quarter of it, green about half]

If there isn’t a “coverage” option for the volumes, this may be another alternative.

Your blue Obj has multiple sensors on it, and based on how many are “lit” inside each volume, you can tell which volume it is mostly in.
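To make the sensor idea concrete, here’s a rough sketch of the counting logic (Python rather than blueprint; the point/box representation and names are just placeholders for whatever your sensor sockets and volume bounds actually are):

```python
def point_in_box_xy(point, box):
    """True if an (x, y) point lies inside an axis-aligned
    rectangle given as (min_x, min_y, max_x, max_y)."""
    x, y = point
    return box[0] <= x <= box[2] and box[1] <= y <= box[3]


def volume_with_most_lit_sensors(sensors, volumes):
    """Count how many sensor points each volume contains
    and return the name of the volume with the highest count."""
    counts = {
        name: sum(point_in_box_xy(s, box) for s in sensors)
        for name, box in volumes.items()
    }
    return max(counts, key=counts.get)


# Three sensors on the object; green catches two of them, red only one.
sensors = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
volumes = {
    "red": (-1.0, -1.0, 0.5, 0.5),
    "green": (0.6, -1.0, 3.0, 1.0),
}
print(volume_with_most_lit_sensors(sensors, volumes))  # → green
```

In a blueprint you’d replace the point-in-box test with per-sensor overlap checks against each volume, then compare the tallies the same way.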

Thanks for your answer Yggdrasil! :slight_smile:

“Coverage” is a much better word for what I was trying to say; return the collision volume with the highest coverage of the object.

I do realise that if no other true options exist, I may have to resort to some kind of workaround similar to the sensor method that you propose.

Assuming I have understood you correctly, this does pose a potential problem. I will admit it would be a rare problem, and that it can probably be counteracted by increasing the density of sensors… although increasing the number of sensors would likely be less than ideal from an optimisation perspective. Time for some more MSPaint. :stuck_out_tongue:

[Image: red and green collision volumes over the blue object, with black sensor points; the red volume slips between the sensors without touching any]

As before, red and green are collision volumes, blue is in this case the object’s own collision, and the black dots are sensor points on the object. In the example above (again, assuming I understood your suggestion correctly), green would be returned as the largest intersecting volume (since it actually hits three sensor points), whereas red would not, because it managed to wedge itself between the sensor points without hitting any.

Again though, can probably be counteracted with more sensors, but which in turn isn’t the best in terms of performance/optimisation. Or am I completely misunderstanding?
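On the “more sensors” point: one way to reason about the trade-off is to generate the sensor points as an evenly spaced grid over the object’s XY footprint, so density becomes a single tunable parameter. A tiny sketch of that (hypothetical names; the spacing math is the only real content):

```python
def sensor_grid(box, nx, ny):
    """Evenly spaced sensor points across a rectangle's XY footprint.

    box is (min_x, min_y, max_x, max_y); nx and ny (each >= 2) control
    the sensor density along each axis. A volume narrower than the
    resulting spacing can still slip between points, exactly as in
    the picture above -- denser grids shrink that gap at the cost of
    more overlap checks per frame.
    """
    min_x, min_y, max_x, max_y = box
    xs = [min_x + (max_x - min_x) * i / (nx - 1) for i in range(nx)]
    ys = [min_y + (max_y - min_y) * j / (ny - 1) for j in range(ny)]
    return [(x, y) for x in xs for y in ys]


# A 3x3 grid over a unit square: 9 sensors, spaced 0.5 apart.
print(len(sensor_grid((0.0, 0.0, 1.0, 1.0), 3, 3)))  # → 9
```

The gap between adjacent sensors is (max − min) / (n − 1) per axis, so you can size the grid against the smallest volume you expect to detect instead of guessing.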

Yes, that’s exactly what I meant. You even took it that much further by putting the red volume between the sensor points. Nice!

I don’t know much about it personally, but there is a Sweep option that seems to give good results. I think it gathers hit data along the movement instead of a single one-off check.

Urh, the forums ate my last quick-reply, so I’ll try again.

I guess my mind is better suited to finding problems and pitfalls than seeing the actual solutions… although that has its uses sometimes. :smiley:

Using the sensor method isn’t ideal, but if I can’t find another solution I’ll use it as my fallback. Still better to have one option than none at all. Honestly, it hadn’t occurred to me to use a sensor system before you mentioned it, Yggdrasil. Thank you for your help thus far, it is much appreciated.

I wasn’t aware of any Sweep option, so I’ll have to look into that sometime soon. For now I’ll wait and see if I stumble upon some better alternatives, and remain open to other suggestions as well. In the meantime I’ll try to figure out why the calculations I’m working on now are going completely bonkers… but that is a subject for a possible thread in the near future if I can’t get it right. :wink: