Please implement Short-Circuit evaluation for Blueprint

Hi

Okay, but I still believe in the power of the Blueprint Okudagram.
Very few Blueprint nodes, almost none, and more small, intuitive boxes full of cool features.

I was looking at the Strategy Game sample. It seems Unreal 4 was made for C++. In the Content Browser there were very few BP programming elements, almost none, only beautiful game art, all very well polished. And the program seemed very simple for a full game, even for those of us who do not understand C++.

Sorry, I’m venting. I have spent the whole night trying something really simple in Blueprint without any success, so I’m still a little frustrated with the Blueprint logic, but I will not give up; I’m an Unreal Engine 4 documentation fan. And I still have to texture, rig, and animate two characters, not to mention the level design.

Match 3 is a good example of a Blueprint & C++ mix. I tend to go down the Shooter / Vehicle Game route: write all my logic in C++ and plug in default properties with sub-classed Blueprints.

People do tend to forget that being able to make full games with Blueprint was a happy accident; it was always primarily an artist tool.

Haha :cool:

Thanks very much, but I think I’m not much of a prophet; Epic already announced quite a while ago that they are working on a BP->C++ converter, so I don’t predict anything, I just repeat what Epic said :slight_smile: I think it’s very experimental in 4.11 and should be usable in 4.12.

I am absolutely not saying that BP is a complete waste of time. Even with the BP->C++ converter, you will still write BP; it’s just that once you package your project, the engine automatically converts your BP code into C++, so you no longer get the worse Blueprint performance (something like 10 times slower than C++).

So you should still learn Blueprint :cool:

It is wonderful, but it does not make much sense to me.
Anyway, everything will become C++. In the end, Blueprint was only a bridge!

Why doesn’t Epic go a step further and convert the WHOLE BLUEPRINT NOMENCLATURE to C++, including the short-circuit the OP mentioned in his posts? It could then be something bidirectional, like BP_C++ <=> C++. Or it could simply be a fully visual C++, very similar to the friendly Blueprint, and nicknamed Blueprint++.

I’m intentionally bumping this up again, in the hope that SOMEBODY will see this as important enough to be on the roadmap.

I just wasted some time chasing a crash caused by a C++ function that should never have run, because a prior condition at the top of the expression was false.
This is really bad.
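Roughly the pattern I mean (the names here are made up, just to illustrate): in C++ the && guards the call, while Blueprint’s AND node evaluates both inputs before the Branch, so the guarded call still runs.

// Hypothetical names, just to illustrate the guard pattern.
struct FTarget { int GetScore() const { return 42; } };

void HandleTarget(const FTarget* Target)
{
    // In C++, operator&& short-circuits: GetScore() is never called when
    // Target is null, so the dereference cannot crash.
    if (Target != nullptr && Target->GetScore() > 0)
    {
        // ... safe to react to the target here
    }
    // A Blueprint AND node, by contrast, evaluates both of its input pins
    // before the Branch fires, so the equivalent graph still calls the
    // getter on an invalid target.
}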

I believe the BP to C++ conversion would not change anything. I don’t think Blueprint’s AND and C++’s AND run the same way (obviously, given the lack of short-circuiting).
I haven’t tried the nativize-BP feature recently, but if I remember correctly, all it does is rewrite all the BP functions in C++ using K2 nodes (which are the Blueprint nodes anyway).

I could have sworn I submitted this as a pull request. Maybe I didn’t, for a reason I don’t remember anymore. But this should theoretically fix/add the short-circuiting.

Definitely an interesting reaction to a pretty standard part of C/C++ and C# (and probably many others) - I would hope that BP would end up working this way eventually!

@ - I tend to use this in C++ when I want an elegant way to perform expensive but rare checks only when needed. If you’ve got two conditions, where one is unlikely to be true but cheap, and the other is likely to be true but expensive, you can put the cheap one first: it fails most of the time, so the AND will almost never try the expensive check. I love it :slight_smile:
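Something like this (made-up function names, just to illustrate the ordering):

// Made-up example: put the cheap, usually-false check first.
bool IsEnemyNearby()          { return false; }            // cheap flag check, usually false
bool HasLineOfSightToPlayer() { /* expensive trace */ return true; }

void TickAI()
{
    // The cheap check fails most of the time, so &&'s short-circuiting
    // means the expensive trace almost never runs.
    if (IsEnemyNearby() && HasLineOfSightToPlayer())
    {
        // ... react to the player
    }
}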

+1

+1 for Short circuit evaluation

+1 on this feature! I am just learning BP, but I come from a background in other languages, and I was pretty shocked that this basic optimization was missing.

Where would you implement the full code, by the way? Does it interfere with the Blueprint Ubergraph compiler?
Please re-submit. I’m sure looking forward to it.

I upvote short-circuit evaluation; I think this is a basic feature of most well-known programming languages. Besides, I’ll add that the Select node should work the same way.

Here is my interpretation of the Select node in C++ (I don’t know if it is the most intuitive):
// Suppose this is, in Blueprint, a member variable
TArray<float> MyArray;

// The node chain feeding the Select inputs A/B
float funcA() { return 0.0f; }       // getting zero
float funcB() { return MyArray[0]; } // getting the first element

// My representation of the Select node
float SelectFloat(bool pickA)
{
    if (pickA)
        return funcA();
    else
        return funcB();
}

// Now if I do this, I systematically get warnings
float MyFloat = SelectFloat(MyArray.IsEmpty());

All of this can be done with only pure Blueprint nodes, so it is not obvious to me how to redesign my Blueprint code to avoid the warning.
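Presumably that is because the Select node evaluates every input before picking one; a rough sketch of that current behavior (reusing the names above, not engine code):

// My guess at the current, non-short-circuiting Select behavior:
// every input is evaluated first, then one value is picked.
float SelectFloat_CurrentBP(bool pickA)
{
    const float A = funcA();
    const float B = funcB(); // still runs when the array is empty -> warning
    return pickA ? A : B;
}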

Upvote this; it is a pity Blueprint does not have this feature, as it can simplify and speed up development a lot.
I wanted Blueprint to be a good, rich functional language with lazy evaluation of nodes, but it looks like it is not. I am disappointed.

Yes, I agree that if you are an artist you have no idea about such a feature and you don’t care, but when a programmer starts to use Blueprint, everything is very unfamiliar. And I hope it is better to teach non-programmers good practices, not bad ones. I don’t know any good programming language without this feature.

The only reason to do short circuit evaluation is for performance.
For correctness / side effects, you have the white execution lines, which you already have control over.

If you add short-circuit evaluation, then you probably also should take advantage of the “pure” marker, to not re-evaluate a “pure” function that has already been evaluated.
And you should not update the value that’s returned by a variable node when it’s re-used later in the graph.
This would allow you to treat the entire graph as lazily evaluated, and do aggressive evaluation graph transformations, that could actually lead to real performance benefits.
(See functional language optimization, like ML and Haskell, for how this can be effectively implemented.)
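Roughly, something like this (a plain C++ sketch with standard-library types, not engine code): each pure node’s evaluation becomes a lazily computed, cached thunk, so it runs at most once per graph execution no matter how many downstream pins read it.

#include <functional>
#include <optional>

// Sketch only: cache a pure node's result so that repeated pin reads
// within one graph execution do not re-evaluate it.
template <typename T>
class TCachedPureNode
{
public:
    explicit TCachedPureNode(std::function<T()> InEvaluate)
        : Evaluate(std::move(InEvaluate)) {}

    const T& Get()
    {
        if (!Cached)            // evaluated lazily, on first use
        {
            Cached = Evaluate();
        }
        return *Cached;         // later reads reuse the cached value
    }

private:
    std::function<T()> Evaluate;
    std::optional<T> Cached;
};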

Now, the draw-back is that this would be a semantics-changing change. It would actually change the semantics to be better, in my opinion, but it would have a non-small impact on some existing scripts.
I think the benefits might actually outweigh that cost, if it’s pre-announced and handled in a good way (perhaps some command that could turn a blueprint graph into its older sibling by inserting the correct read/write/duplicate nodes.)

This is the most confusing part of blueprints to me, and it’s also the saddest, as it precludes so many possible optimizations:

I think it should print the same value twice, as the data is sourced from the same node, but it prints the value and then the incremented value.
I think fixing this would be worth it (and would give much better defined semantics to short-circuit evaluation!)
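To make the two semantics concrete, here is a plain C++ analog (the counter and getter names are made up, since the graph screenshot is not reproduced here):

#include <cstdio>

int Counter = 0;
int GetCounter() { return Counter; } // stands in for the "pure" getter node

int main()
{
    // Current Blueprint behavior: the pure node is re-evaluated at each use,
    // so the second print sees the incremented value.
    std::printf("%d\n", GetCounter()); // prints 0
    ++Counter;                         // the impure node in between
    std::printf("%d\n", GetCounter()); // prints 1

    // Proposed semantics: read the node once where it is first used and
    // reuse that value, so both prints show the same number.
    Counter = 0;
    const int Snapshot = GetCounter();
    std::printf("%d\n", Snapshot);     // prints 0
    ++Counter;
    std::printf("%d\n", Snapshot);     // still prints 0
    return 0;
}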

My thinking may be affected by having got used to how blueprints work, but first and foremost I’m a programmer, and I find the idea of that graph printing the initial value twice extremely counter-intuitive.

You seem to essentially be saying that all nodes be executable nodes under the hood, just that the ‘pure’ ones be implicitly plugged into the execution chain at the point that one of their outputs is first used? What about when a getter node was used from disjoint (asynchronous) execution paths in an event graph - when does it get reevaluated? Or behaviour changing as a result of cutting a value wire in a different section of the graph? I think this would introduce far more issues than it solved.

I agree there are issues in BP with unexpected reevaluations of pure nodes that can lead to performance concerns, but personally I don’t think BP performance should be a top priority. It was never intended as a full substitute for C++ anyway.

For me a bigger BP issue is the fact that node pins in event graphs behave like class scope variables. This leads to at least two issues - 1. recursion in event graphs is a no-no; 2. a blueprint will (generally without user intention) retain a strong reference to a UObject that at any point passed through a pin on its event graph, until either the pin value is overwritten, or the blueprint instance is destroyed. The latter had me stumped for a fair while, and could in some cases cause a significant memory/resource problem.

Hey there!

I’ve looked over the thread here and I can see where there are opinions of varying amounts of for and against. I went ahead and put this discussion into a report to be further examined. No promises one way or the other on how it will turn out, but please keep up the feedback and reasoning, as it really helps our teams see what you want improved and how.

Thanks!

Hello, , first of all big thanks for hosting such cool and useful Twitch live streams. They are very enjoyable to watch.

For me some simple rules would be enough.

  1. Short-circuiting is a must for selecting which pure nodes should be run.
  2. When execution flow reaches the next impure node, it gathers all the needed pure nodes connected through its inputs and executes them if needed (that is why short-circuiting is a must-have feature: it can be used to cut off unneeded pure nodes at that step). (Note: that is called lazy evaluation; many functional languages have it, and Blueprint in general looks like one of them. A rough C++ sketch of this is included below.)

All other optimizations are not so important. But maybe some pure-node result caching could be implemented. There is still a problem: pure-node values can be affected by impure nodes executed before them (as in jwatte’s example), and even pure nodes can affect the state of other pure nodes, so caching optimizations here are pretty much impossible without further complicating the Blueprint language. That is why the second rule is important: pure nodes should be calculated just before they are used, otherwise some side effects of previous impure nodes can be lost. (Actually, I have not hit that situation yet, which suggests it works that way now, but I don’t have the habit of modifying objects in pure nodes, and it does look like some pure nodes are calculated long before they are used.)
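A rough sketch of rule 2 in C++ (hypothetical types; nothing here is engine code): the impure node receives its pure inputs as unevaluated thunks and only forces the ones the short-circuit / Select logic actually needs.

#include <cstdio>
#include <functional>

// Hypothetical stand-ins for pure nodes: unevaluated until called.
using FPureBool  = std::function<bool()>;
using FPureFloat = std::function<float()>;

// Hypothetical impure node: it pulls its pure inputs lazily, so the
// right-hand condition and the unselected branch are only evaluated
// when they are actually needed.
void ImpureNode(FPureBool CheapCheck, FPureBool ExpensiveCheck,
                FPureFloat OptionA, FPureFloat OptionB)
{
    // Short-circuit: ExpensiveCheck() only runs if CheapCheck() returned true.
    const bool bCondition = CheapCheck() && ExpensiveCheck();

    // Select: only the chosen input thunk is forced.
    const float Value = bCondition ? OptionA() : OptionB();

    std::printf("%f\n", Value);
}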

The idea here is that pure nodes are not only property getters; they can even access components that may be unavailable (if some flags are not set and a Select node was used), or perform some (relatively) complex computations. Short-circuiting is a very cheap and natural concept to understand; I think many people assume it works by default without knowing it actually does not. Select-node semantics suggest you are selecting which pure node you are accessing; evaluating them all and then picking the appropriate value is not what one expects.

So that code should print only the "false" string, once; right now it prints "false" and "true".


And that code should print only the "Option2Log" string, not all of "Option0Log", "Option1Log", and "Option2Log".


I disagree with you on this. While yes, in one case some form of wiring can be used as an optimization, in your example it is quite clear that the value has changed between the two prints, and it is expected to print different values.
Also, short-circuiting can be used not only for optimization, but also for avoiding unneeded side effects (like log printing, casting, or accessing unavailable objects).

On the other hand, reroute nodes could be used as a hint that the pure value needs caching (much like what is done with impure node results).

Like here, Position could be cached between the two Use Position calls; on the other hand, if we don’t need caching, we can call that node twice.

And I would like to see such an optimization, but it could break a lot of existing code, and the whole caching logic could be unclear to non-programmers or Blueprint first-comers.

PS. Also, there is a cryptic Const flag on function nodes, and I don’t know why it is different from Pure or what it is for in general. (In functional languages, “pure” means that the function is constant in some way and does not have any side effects, like changing the state of any object. In Blueprint, Pure only affects the way the node is represented.)
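For reference, this is roughly how those flags look on the C++ side (hypothetical class and file names; the editor behavior described in the comments is how I understand it and may vary by engine version):

// Hypothetical class, only to illustrate the specifiers discussed above.
#include "UObject/Object.h"
#include "MyHelper.generated.h"   // hypothetical generated header name

UCLASS()
class UMyHelper : public UObject
{
    GENERATED_BODY()

public:
    // Pure node: shown without execution pins in the graph.
    UFUNCTION(BlueprintPure, Category = "Example")
    float GetHealthPercent() const { return Health / MaxHealth; }

    // Const BlueprintCallable function: as far as I can tell the editor
    // also displays this without execution pins, which seems to be where
    // the separate "Const" flag on function nodes comes from.
    UFUNCTION(BlueprintCallable, Category = "Example")
    int32 CountAliveEnemies() const { return 0; }

private:
    UPROPERTY()
    float Health = 100.f;

    UPROPERTY()
    float MaxHealth = 100.f;
};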

If pure nodes are re-evaluated on access, what then is the semantics of “pure”?
Does “pure” then mean “ha ha just kidding, re-evaluated at some points in the control flow that are not obvious!”
So, in essence, the only thing “pure” actually does is remove some implicit control flow (white) wires, at the expense of then not being obvious about when those (implicit) wires actually get there?
If so, the word “pure” should probably be changed to “implicit” …

^“Pure” actually means it does not change any variables.

I myself am for short-circuit implementation in Blueprints.

I’ve always wanted this behavior. For my Shooter asset, there are tons of cases where the performance gain is real.

I ended up making a set of really ugly macros called XAndSeq, so I can easily ‘and-sequence’ or ‘ghetto short-circuit’ conditionals. That way I don’t have branches everywhere.
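(Roughly, an ‘and sequence’ amounts to nesting the checks, so each later condition only runs when the earlier ones passed; the names below are illustrative only.)

// Illustrative names only.
bool IsAlive();
bool HasAmmo();
bool CanSeeTarget();
void Fire();

void TryFire()
{
    // Nesting the checks means each later condition is only evaluated when
    // the earlier ones passed: the same effect that
    // IsAlive() && HasAmmo() && CanSeeTarget() gives in C++.
    if (IsAlive())
    {
        if (HasAmmo())
        {
            if (CanSeeTarget())
            {
                Fire();
            }
        }
    }
}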
