Ribbon renderer data corruption on AMD GPUs

Hey, Stu!

Welcome back.

Right. Other than verifying that AMD GPUs were happy (and they were, no RHI/D3D errors or stalls), I didn’t measure performance in much detail because I was expecting it to be very similar, albeit with slightly larger buffers. Those I did measure: in our case, the “waste” would go from 6% to 25% of the buffer’s size (worst case) in an NS with about 6 ribbon renderers using a full range of different features. Mind you, we’re talking about a buffer for 75 ribbons, which amounts to roughly 2400 bytes total, so quite small when put in perspective.
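For scale, here’s a quick back-of-envelope in Python; the 75-ribbon count, ~2400-byte total, and 6%/25% waste ratios are the figures above, while the before/after breakdown into absolute bytes is my own:

```python
# Back-of-envelope: absolute waste implied by the percentages quoted above.
TOTAL_BUFFER_BYTES = 2400   # approx. total buffer size for ~75 ribbons
WASTE_BEFORE = 0.06         # current padding overhead
WASTE_AFTER = 0.25          # worst-case overhead after the change

waste_before = TOTAL_BUFFER_BYTES * WASTE_BEFORE   # 144 bytes
waste_after = TOTAL_BUFFER_BYTES * WASTE_AFTER     # 600 bytes
extra = waste_after - waste_before                 # 456 bytes extra, worst case

print(f"before: {waste_before:.0f} B, after: {waste_after:.0f} B, "
      f"delta: {extra:.0f} B")
```

So even in the worst case we’re talking about a few hundred extra bytes per buffer.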

The fitted-buffer idea is a very good one too, if we also want to optimize storage, but I think we’d have to do this more globally to reap the best benefits. At that point a GPU-centric system like the one you describe becomes the better option, but of course it means larger changes.

Thank you both for the ideas and discussion!

Dan