I get that there are implementation differences. However, from learning and using them, I find I'm often copy/pasting between them when I need to switch.
It would be nice if a Blueprint was just a Blueprint, and how it was stored/executed in the engine was handled automatically.
Why wouldn’t a function that could be “pure const” always be “pure const”, unless overridden by the author?
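For reference, by “pure const” I mean what the BlueprintPure specifier plus a const member function gives you on the C++ side - a node with no execution pins. A minimal sketch, with the class and function names purely illustrative:

```cpp
#include "GameFramework/Actor.h"
#include "MyActor.generated.h"

UCLASS()
class AMyActor : public AActor
{
    GENERATED_BODY()

public:
    // BlueprintPure on a const member function: appears in the graph as a
    // pure node (no execution pins) - the “pure const” case in question.
    UFUNCTION(BlueprintPure, Category = "Health")
    float GetHealthPercent() const
    {
        return MaxHealth > 0.f ? CurrentHealth / MaxHealth : 0.f;
    }

private:
    float CurrentHealth = 100.f;
    float MaxHealth = 100.f;
};
```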
Why wouldn’t a function automatically be converted to a macro when a Delay node is added? If needs must, how about a warning message about the performance cost before changing it to a macro?
It took me a bit to warm up to Blueprints, and I see how powerful they are for authoring content. It seems like this detail could be hidden from authors for a better experience.
Besides a lot of obvious differences between the two systems, a less obvious but totally breaking limitation I can see is that the variable output for macros and functions differs: macros “return” references, while functions return values. In some cases it does not matter; however, in many cases you choose one over the other specifically because of this side effect. With a unified system, you’d have no control over it.
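A rough plain-C++ analogy of that difference (illustrative only - not how the Blueprint VM actually implements it): a macro output behaves like a reference into the caller’s data, while a function output is a copy.

```cpp
#include <vector>

// Function-style: the output is a copy of the value.
int GetFirstByValue(const std::vector<int>& Items)
{
    return Items[0];
}

// Macro-style: the output is a reference straight into the caller's data.
int& GetFirstByReference(std::vector<int>& Items)
{
    return Items[0];
}

int main()
{
    std::vector<int> Items{1, 2, 3};

    int Copy = GetFirstByValue(Items);
    Copy = 42;                        // Items[0] is still 1

    GetFirstByReference(Items) = 42;  // Items[0] is now 42
    return 0;
}
```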
Such a feature shouldn’t ever be considered by the C++ programmer, as it would promote bad programming practices. Below I’ll try to explain why.
Blueprints already allow you to do bad things without ever letting you know that you’re doing something bad. You’re executing things that look innocent and ending up with issues you’re not even aware of, until you realize the terrible cost (framerate, memory, wasted development time, badly designed systems). This is OK - Blueprints are awesome for non-programmers and can be efficient, if… you know what you’re doing.
Functions and macros are different beings. AFAIK the content of a macro is actually copy-pasted into every blueprint graph that uses it, making the graph bigger and longer to process (please correct me if I’m wrong on this). Functions in blueprints are never embedded into another graph.
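If that’s accurate, the closest text-code analogy is preprocessor expansion versus a real function call: the macro body is duplicated at every use site, while the function body exists once and is called. Illustrative C++ only, not the Blueprint VM itself:

```cpp
// Expanded (copy-pasted) into every use site - like a Blueprint macro.
#define CLAMP01(X) ((X) < 0.f ? 0.f : ((X) > 1.f ? 1.f : (X)))

// Defined once; every caller invokes the same single body - like a Blueprint function.
float Clamp01(float X)
{
    if (X < 0.f) return 0.f;
    if (X > 1.f) return 1.f;
    return X;
}
```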
Functions and macros are different things, and auto-converting anything from a function to a macro could introduce bugs you never experienced with the function.
Last but not least: the Delay node is pure evil. It is used far too often, making blueprint code quite difficult to read, and it cannot be canceled. Actions separated by time should often also be separate functions/events. Otherwise the code gets really messy - it can cause bugs that are difficult to track and fix (requiring redesigning messy blueprints). That’s especially bad if the given blueprint is part of some system or feature: Blueprint A would still fire its event even though Blueprint B already canceled the action…
I understand the convenience of using Delay nodes, but this node should be avoided. You could do an entire project without it.
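For anything that genuinely needs a time gap, a timer is the usual cancellable alternative - Set Timer by Event / Clear and Invalidate Timer by Handle in Blueprint, or on the C++ side something along these lines (actor and function names are just an example):

```cpp
// Assumes an FTimerHandle member named BlinkTimerHandle declared in the header.

void AMyActor::StartBlinking()
{
    // Runs ToggleLight every 0.5s; the handle lets us cancel it later,
    // which a Delay node cannot do.
    GetWorldTimerManager().SetTimer(
        BlinkTimerHandle, this, &AMyActor::ToggleLight, 0.5f, /*bLoop=*/true);
}

void AMyActor::StopBlinking()
{
    // Cancel the pending/looping action at any time.
    GetWorldTimerManager().ClearTimer(BlinkTimerHandle);
}
```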
Replace “cosmetic scripting”, i.e. blinking lights, with a Timeline component or, much better, Sequencer-based animation.
So… the proposed feature would promote bad programming practices: overusing Delays and blueprint macros. Non-technical people would simply assume this is a good way of doing things.
Thanks everybody for posting. @ClavosTech yes, what I’m talking about is the user experience, and the fact that the differences that are exposed are complex, opaque, and difficult to change after the fact - even though copy/paste is a solution. It would increase the utility of Blueprints.
The problem is using functions and macros for gameplay. I had the same problem in the past, and my past self would have understood you.
My present self tells you to forget the nonsense of using functions and macros just to make the code look cleaner.
Use functions to return calculations; if you cannot make it pure, it probably should not be a function.
Use macros for flow of execution, or to get different execution paths based on conditions.
Use events for gameplay, where you can use delays, timelines, dispatchers…
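Roughly how that split looks if you expose things from C++ (the specifiers are real, the class and function names are just an example):

```cpp
UCLASS()
class AMyCharacter : public ACharacter
{
    GENERATED_BODY()

public:
    // Calculation that can stay pure -> a function (pure node, no exec pins).
    UFUNCTION(BlueprintPure, Category = "Stats")
    float GetDamageMultiplier() const;

    // Gameplay reaction -> an event, implemented in the Blueprint graph,
    // where Delays, Timelines and dispatchers are allowed.
    UFUNCTION(BlueprintImplementableEvent, Category = "Gameplay")
    void OnPowerUpCollected();
};
```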
Local variables (unless you want to fill your BP with temporary variables),
they can be called from other BPs and return values,
Events can be called and execution continues without their logic having to finish first, but Functions are guaranteed to finish before execution continues,
if you have compilation-heavy logic which is used multiple times in a BP, using a function will only compile it once (every instance of a reused macro gets compiled again, increasing compilation time).
Also, in case you don’t know, pure functions get executed once for every node they are connected to. Let’s say I have a pure function with 5 return pins, each connected to a different node; its logic will get executed 5 times in a single frame. So you shouldn’t use a pure function if its return is connected to multiple nodes and it is heavy to execute (see the sketch after this list).
There are probably other advantages of functions which I’m forgetting or simply don’t know.
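To make the pure-function point concrete: wiring one expensive pure node into several consumers is the equivalent of calling it once per consumer, so it’s usually better to call it once and cache the result in a local variable. A plain C++ illustration with hypothetical names:

```cpp
#include <cmath>

// Hypothetical stand-ins for an expensive pure function and its consumers.
float ComputeExpensiveValue() { return std::sqrt(2.0f); }
void  Process(float)          {}

void PureNodeFanOut()
{
    // Equivalent of one pure node feeding three separate nodes:
    // the expensive logic runs three times in the same frame.
    Process(ComputeExpensiveValue());
    Process(ComputeExpensiveValue());
    Process(ComputeExpensiveValue());
}

void CachedLocalVariable()
{
    // Call once, store in a local variable, reuse the result.
    const float Cached = ComputeExpensiveValue();
    Process(Cached);
    Process(Cached);
    Process(Cached);
}
```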
Nothing “rude” about that. Events are just a way of scripting specific to visual scripting tools like blueprints. The default way of scripting anything is a function. Events and macros generally should be used only if there’s something you can’t do with a function, e.g. reacting to a ComponentHit event - which could then also call a function.
Technically speaking - and that’s what matters most - functions are the cleanest way of scripting. Any kind of “subscribing until an event finishes” is a redundant and more complicated version of simply waiting on a function’s return - which always works, whether or not you place a Return node in your blueprint.
And finally, 4.24 “fixed” a fundamental thing about functions: specifying the level of access. Now every function can be:
Public: available to call from any other class/blueprint
Protected: only available to call from the class which defined that function + all child classes/blueprints
Private: only available to call from the class which defined that function - inner logic of the class that should never be accessed from outside
The advantages of this should be clear: you can control access to methods. It’s the bread and butter of Object-Oriented Programming. This should have worked in blueprints since forever, as it is a fundamental thing…
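For comparison, the plain C++ counterpart of those three levels (class and function names are just an example; whether the engine enforces each level identically for Blueprint-exposed functions is a separate question):

```cpp
UCLASS()
class AInventoryActor : public AActor
{
    GENERATED_BODY()

public:
    // Public: callable from any other class/blueprint.
    UFUNCTION(BlueprintCallable, Category = "Inventory")
    void AddItem(FName ItemId);

protected:
    // Protected: intended for this class and its children only.
    UFUNCTION(BlueprintCallable, Category = "Inventory")
    bool CanAddItem(FName ItemId) const;

private:
    // Private: inner logic, never meant to be called from outside.
    void RebuildItemCache();
};
```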