What makes Blueprint compile times longer?

I’m working on adding functionality to my game character (weapons, abilities, etc.), and I’m noticing an increase in compile times every week. It’s at 30 seconds now, up from 10 seconds a while ago. Is there something specific that increases compile times that I should look out for? I can’t imagine why it would take so long, even when I only change something minor that doesn’t really affect anything.

Should I be splitting functionality into separate blueprints? Components? So I can compile each one individually?

Putting everything into one single Blueprint will of course increase the compile times.
It’s always better to modularize your blueprint into separate components.

In your Project Settings, under Editor / Blueprints, turning off Use Compilation Manager might help a bit.

I can’t say for sure, but in 4.17 the compilation manager introduced compile time regressions for some blueprints. Details at the end of the comment, but they are pretty gory.

The good news is your compilation time should go down quite a bit in 4.18 and you can disable the compilation manager, as AceD suggested.

Here’s the story of the regression: When a blueprint (A) changes its public interface (e.g. the number of inputs to one of its functions changes), we need to recompile every blueprint that calls that function. We don’t track which blueprint calls which function, so we recompile every blueprint that has a dependency on blueprint A. In the old compilation flow this was implemented by hashing a blueprint’s public interface; when that hash changed value, we would recompile its dependents.

This hash had to be computed at the end of the compilation process, so it was difficult to port this logic to the compilation manager. The hashed value would also sometimes change every time you compiled a blueprint, meaning that for some blueprints we would always recompile dependents.
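The scheme described above can be sketched roughly as follows. This is an illustrative Python model, not the engine’s actual C++ code; the function names, the blueprint representation, and what goes into the hash are all my own simplifications:

```python
import hashlib

def interface_hash(blueprint):
    """Hash a blueprint's public interface: function names plus parameter
    counts. (The real hash covered more and was computed at the end of
    compilation; this only shows the shape of the idea.)"""
    parts = sorted(f"{name}/{len(params)}"
                   for name, params in blueprint["functions"].items())
    return hashlib.sha1("|".join(parts).encode()).hexdigest()

def blueprints_to_recompile(changed, dependents, old_hashes):
    """If `changed`'s interface hash differs from the cached value, queue
    every dependent for recompilation -- whether or not it actually calls
    the function that changed."""
    new_hash = interface_hash(changed)
    if new_hash != old_hashes.get(changed["name"]):
        old_hashes[changed["name"]] = new_hash
        return list(dependents.get(changed["name"], []))
    return []
```

The failure mode described in the thread falls out of this design: if anything fed into the hash is non-deterministic, the hash “changes” on every compile and the dependents are recompiled every single time.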

For the large blueprints I was testing against internally, the hash was not deterministic (i.e. it changed every time you clicked compile), so the compilation manager always represented a speed-up in individual compilation. At the time I did not even know that the hash was non-deterministic; I only discovered that after the compilation manager was released and we got reports of compile time regressions. Very unfortunate.

Anyway, as of 4.18 the compilation manager has an extra phase where it skips compilation of blueprints that are being recompiled as a dependency if none of the blueprints they depend on had an interface change. The new implementation does not rely on a hashed value, so it is easier to maintain and hopefully more reliable. It also avoids dependency recompilation when a new function is added (because nothing could care about the new function’s signature).
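A minimal sketch of that extra phase, continuing the same simplified Python model as above (again, these names and data shapes are my own assumptions, not the engine’s API): a blueprint queued as a dependency is only actually compiled if one of its dependencies changed a signature that already existed, so brand-new functions are ignored:

```python
def needs_recompile(dependencies, old_interfaces):
    """Return True only if some dependency changed or removed a function
    signature that existed before. Comparing interface descriptions
    directly (instead of a hash of them) sidesteps the non-determinism
    problem, and merely *adding* a function never forces a recompile."""
    for dep in dependencies:
        old = old_interfaces.get(dep["name"], {})
        for fname, params in old.items():
            # A removed function shows up here as None != params.
            if dep["functions"].get(fname) != params:
                return True
    return False
```
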

Thanks for the detailed explanation! Disabling the compilation manager has already helped for now, and hopefully the 4.18 release will also bring some good improvements.

Since we are talking about having one large blueprint vs. splitting up the code into multiple components:

I was watching the livestream from Zac the other day, in which he talks about blueprint communication. At some point he mentions that a single blueprint should not really exceed around 100 nodes (otherwise go call a programmer - you’re doing it wrong!). OK, here are my questions/concerns with that:

I totally understand that compile times increase with the number of nodes I throw into a blueprint class. However, if I decide to ignore compilation times, what will the impact be on the performance of that blueprint class? Let’s take some completely random numbers for demonstration: I have one Blueprint class with 500 nodes vs. a blueprint with 50 nodes plus nine components with 50 nodes each (so also 500 nodes overall). Will it make a difference performance-wise at runtime? I mean, is there some kind of overhead from blueprint communication between the parent class and its components that, at some point, can slow things down at runtime?

Thanks for any insights on that!

We have lots of blueprints internally that have more than 100 nodes. It’s nice to keep things as small as possible, but no smaller :slight_smile:

> Will it make a difference performance-wise at runtime?

No. The details of each blueprint (either the one very large blueprint or the many smaller blueprints) will determine performance. The only thing I can say is that if you have to Tick something, it is probably better to tick one large thing than many small things.


I’m using 4.18.2, and when I compile my character BP, even after changing just a constant value in a single node, I wait a good 10 seconds before it finishes. It’s a pretty big BP, but I’d imagine it’s a pretty standard size for a complex game. I’ve seen this happen with a much smaller BP as well.

Turning off the compilation manager cuts it down to 1-2 seconds.

If the compilation manager makes things so much slower and disabling it doesn’t hurt anything, it seems to me that whatever it’s doing qualifies more as a bug than a feature.


Disabling the compilation manager gave me good results regarding compile times, but there’s one caveat I found: after restarting the engine, it can crash if I play the game (in editor) before recompiling the blueprints I modified while the compilation manager was disabled.

Not sure why or how; it simply happens.

I’d love any advice on this, because having BPs take up to 20 seconds to recompile thanks to the compilation manager is driving me insane.