Blueprint function vs macro performance difference

I’ve read the “Functions vs. Macros” paragraph of the “Blueprint Best Practices” doc here:
It states quite plainly: “In general, a good rule of thumb might be to use a macro if there’s a quick bit of functionality that you want to reuse everywhere.”
So, in short: “just use macros whenever possible”.
But it’s still unclear to me what the difference in performance (CPU/memory) is between the two in Blueprints, especially if one uses them A LOT.
There’s a guy on Reddit who says he saw a 75% reduction in memory usage after converting macros to functions:
So is it true that wrapping nodes in a Blueprint function instead of a macro helps reduce memory usage?
I think this should be clarified in the “Blueprint Best Practices” document for sure, but personally I just need to know whether such a difference exists under the hood or not.

Hey Alexulter! Thanks for hanging in there while I asked around about your feedback. As for the reduction in memory usage, I wasn’t able to find any benchmark data to support that claim. Memory usage can be influenced by a lot of things, including what machine you’re using and what version of the BP VM you use, so your results may vary. I’ll continue to ask around internally and see what I can find.

Macros inline their graph into the graph where you use them. That means if you use a macro in the Event Graph, every local variable used in the macro gets added to the persistent frame (including arrays and anything else large) — and the Event Graph’s frame stays allocated for the lifetime of the object, rather than being freed when execution finishes.