The only reason to do short-circuit evaluation is performance.
For correctness / side effects, you have the white execution lines, which you already control.
If you add short-circuit evaluation, then you should probably also take advantage of the “pure” marker and avoid re-evaluating a “pure” function that has already been evaluated.
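A minimal sketch of what that caching could look like, assuming a simple node-evaluation model (`PureNode` and its fields are hypothetical names, not Unreal’s actual implementation):

```python
# Sketch: memoize the result of a node marked "pure" so every downstream
# pin that pulls from it reuses the value instead of re-running the function.

class PureNode:
    def __init__(self, fn):
        self.fn = fn             # the pure computation this node wraps
        self._cached = None      # memoized result
        self._evaluated = False  # has fn run during this execution pass?

    def evaluate(self):
        # Run the underlying function at most once per execution pass.
        if not self._evaluated:
            self._cached = self.fn()
            self._evaluated = True
        return self._cached

calls = []
node = PureNode(lambda: calls.append(1) or 42)
print(node.evaluate())  # runs fn, prints 42
print(node.evaluate())  # reuses the cached 42; fn does not run again
print(len(calls))       # fn ran exactly once: prints 1
```

In a real engine the cache would need to be invalidated at the start of each execution pass; the sketch only shows the within-pass sharing.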
And you should not update the value that’s returned by a variable node when it’s re-used later in the graph.
This would allow you to treat the entire graph as lazily evaluated, and to do aggressive evaluation-graph transformations that could actually lead to real performance benefits.
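To make the laziness point concrete, here is a sketch (an assumed model, not Unreal’s code) of node outputs as thunks: a short-circuited branch that never demands a value never pays for computing it, and a demanded value is computed at most once.

```python
# Sketch: wrap a node's computation in a thunk. Evaluation is deferred
# until the value is actually demanded, and the result is shared afterwards.

def thunk(fn):
    """Defer fn until forced; evaluate at most once and share the result."""
    cell = {}
    def force():
        if "v" not in cell:
            cell["v"] = fn()
        return cell["v"]
    return force

evaluations = []
expensive = thunk(lambda: evaluations.append("ran") or 99)

# Short-circuit: the condition is false, so the thunk is never forced
# and the expensive node never runs.
result = expensive() if False else 0
```

This is essentially the sharing that lazy functional-language runtimes give you for free.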
(See functional language optimization, like ML and Haskell, for how this can be effectively implemented.)
Now, the drawback is that this would change the semantics. It would change them for the better, in my opinion, but it would have a non-trivial impact on some existing scripts.
I think the benefits might actually outweigh that cost, if it’s pre-announced and handled well (perhaps a command that could convert a Blueprint graph to the old semantics by inserting the correct read/write/duplicate nodes).
This is the most confusing part of blueprints to me, and it’s also the saddest, as it precludes so many possible optimizations:
I think it should print the same value twice, as the data is sourced from the same node, but instead it prints the original value and then the incremented value.
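A hypothetical reconstruction of that graph in plain code (the names here are illustrative, not the actual nodes): a pure “get variable” node is re-evaluated at each use, so a write that fires between the two uses changes what the second use sees.

```python
# Model of the confusing behavior: the pure getter is re-run per use,
# so a write between two reads of the "same" pin leaks through.

counter = {"value": 0}

def get_counter():            # models a pure variable-get node
    return counter["value"]

first = get_counter()         # first Print reads 0
counter["value"] += 1         # an increment node fires between the prints
second = get_counter()        # second Print reads 1, not 0

print(first, second)          # prints "0 1" under re-evaluation semantics
```

Under cache-once semantics, both reads would see the snapshot taken at the first evaluation, matching the “same source, same value” intuition.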
I think fixing this would be worth it (and it would give much better-defined semantics to short-circuit evaluation!)