I mean, it will be significantly slower than doing it on the GPU with render targets. The reason is that on the CPU you need to iterate over all cells one at a time, and due to the overhead of Blueprints, each cell carries a fixed cost. On the GPU, using a texture, there is an implicit for loop running over all pixels, so you can think of each step as just looking up a few neighbors per pixel. That's why a 512x512 fluid sim only takes around 0.01-0.03ms, which is crazy fast. You couldn't even complete a 512x512 loop in Blueprints without locking up your machine for 30 seconds and hitting the instruction limit (it's 262,144 cells, after all). Even on the CPU without Blueprints, you would have to do a lot of work multithreading your sim to get anywhere near GPU speeds.
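To make the cost difference concrete, here is a minimal CPU-side sketch of one step of a 1D height-field sim in Python. This is a hypothetical wave-propagation update, not the exact Blueprint logic, but it shows the explicit per-cell loop that the GPU gets for free: each cell only reads a couple of neighbors, yet on the CPU you still pay the loop overhead for every single cell.

```python
def step(height, velocity, damping=0.99):
    """One hypothetical sim step: each cell is pulled toward the
    average of its two neighbors. Cost grows linearly with cell count."""
    n = len(height)
    new_height = height[:]
    for i in range(n):  # explicit loop: the GPU runs this implicitly, once per pixel
        # clamp neighbor lookups at the grid edges
        left = height[i - 1] if i > 0 else height[i]
        right = height[i + 1] if i < n - 1 else height[i]
        # accelerate toward the neighbor average, with damping
        velocity[i] = (velocity[i] + ((left + right) * 0.5 - height[i])) * damping
        new_height[i] = height[i] + velocity[i]
    return new_height, velocity
```

Run a few thousand cells through this and you can watch a spike spread outward; run 262,144 of them per frame in an interpreted loop and you see exactly why the Blueprint version chugs.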
I just tested 4k 1D cells using my BP and it was noticeably chuggy (even when I pointed the camera mostly offscreen to avoid the cost of 4k draw calls). With the cube heights being set on all 4k cells it ran at 20fps, and when I disabled the 'set cube heights' function to test just simulating the 4k points, the fps was still only 34, down from ~170+. A few hundred points run no problem, though; I am still at 130fps with 256 points. The point is that this will cost whole milliseconds instead of tiny fractions of a millisecond.
Forces:
To add forces to the above, all you need to do is make another function that adds some value to any number of cells. You could pick random cells and add a random value, which is a great way to try out rain.
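In the same spirit as the sketch above, a rain forcing function is just a loop that bumps a few random cells each frame. The function name and parameters here are made up for illustration, not anything from the actual BP:

```python
import random

def add_rain(height, drops=4, strength=0.5, rng=random):
    """Hypothetical 'rain' force: add a random amount of height
    to a few randomly chosen cells each step."""
    for _ in range(drops):
        i = rng.randrange(len(height))
        height[i] += rng.uniform(0.0, strength)
    return height
```

Call it once per frame before the sim step and each drop becomes a little ripple source.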
Doing this on the GPU is so much fun because you can actually use textures and other inputs as your forces. In one of my tests I printed debug text into the material, and the changing digits made really cool fluid ripples. Once the next engine version comes out, we should have some cool examples to show.