I’m looking into this code to see if I can reuse the core of it for non-animation-related value blending, but I’m really struggling to work out what it’s doing. I can see there’s a Delaunay triangulation in the preprocessing step, but I don’t follow how the weights are calculated from it. I’m also somewhat surprised that it apparently doesn’t limit itself to the closest samples - I would have thought the whole reason to restrict samples to a regular grid was to simplify the blending in exactly that way.
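For reference, this is roughly what I would have naively expected the weight calculation to look like once the enclosing Delaunay triangle is known: standard barycentric coordinates, giving nonzero weight to only the three vertices of that triangle. This is just my own sketch of that idea, not the actual code in question:

```python
def barycentric_weights(tri, p):
    """Barycentric weights of point p with respect to triangle tri.

    tri is a sequence of three (x, y) vertices; returns (w_a, w_b, w_c),
    which always sum to 1. Inside the triangle all three weights are in
    [0, 1]; outside, at least one goes negative.
    """
    (ax, ay), (bx, by), (cx, cy) = tri
    # Signed-area determinant of the triangle (twice its area).
    det = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    # Weights of vertices a and b via the standard 2D barycentric formula.
    w_a = ((by - cy) * (p[0] - cx) + (cx - bx) * (p[1] - cy)) / det
    w_b = ((cy - ay) * (p[0] - cx) + (ax - cx) * (p[1] - cy)) / det
    # The third weight is whatever remains so the weights sum to 1.
    w_c = 1.0 - w_a - w_b
    return w_a, w_b, w_c


# Example: the centroid-ish point (0.5, 0.5) in a right triangle.
print(barycentric_weights([(0, 0), (2, 0), (0, 2)], (0.5, 0.5)))
# → (0.5, 0.25, 0.25)
```

Under this scheme only the three enclosing vertices ever get weight, which is why the behaviour I describe below surprises me - it suggests the code is doing something other than (or in addition to) plain per-triangle barycentric blending.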
My experimentation has shown that the following case
not only gives a weight to the bottom-right sample, it actually gives it a larger weight than the bottom-centre sample (although both are very low). Maybe that’s an effect of the sparse samples in my test, but it seems a little strange.
Anyone have any insight?