Do you mean while editing the graph? How complex are we talking about? 250+ nodes, 1000 nodes?
If you experience it intermittently, consider disabling the GeForce Experience overlay.
I have a fairly good PC at the moment but was wondering which specific parts make the editor run faster? Is it RAM, CPU, GPU, or general memory speed? The editor is not running slow for me, but at moments it drops down to 10 fps in large Blueprints.
I am running:
- Core i7-6700K @ 4 GHz
- 61.6 GB usable RAM
- Nvidia GTX 1080
- NVMe Samsung SSD 950
- Two Samsung portable 1 TB drives (SCSI)
Most of my classes have collapsed graphs in them, and each graph has from 20 to 2,000 nodes.
In that case, I'm pretty sure that compilation takes several seconds as well. This is a topic that has been discussed in the forums over the years, more than once. Cramming a lot of nodes into the same graph does hurt the overall responsiveness of the user interface. (I might be mistaken here, but I think there is actually a hard-coded limit to the number of nodes a single Blueprint can utilise at a time, too - don't quote me on that, though!)
The behaviour is quite noticeable with collapsed graphs but not so much when you wrap the code in functions.
As far as I know, there is no magic button to improve the performance. The consensus was to split the code between separate actors and/or create pure reusable functions.
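If you're open to a bit of C++, one common way to get "pure reusable functions" without bloating the graph is a Blueprint function library. This is only a sketch of that pattern - the class and function names (UMyBlueprintHelpers, DampedValue) are made up for illustration, and it obviously only compiles inside an Unreal project:

```cpp
// MyBlueprintHelpers.h - hypothetical example of moving a hot piece of
// Blueprint math into a BlueprintPure C++ node. Fewer nodes in the graph
// means less work for the editor when it redraws and recompiles.
#pragma once

#include "Kismet/BlueprintFunctionLibrary.h"
#include "MyBlueprintHelpers.generated.h"

UCLASS()
class UMyBlueprintHelpers : public UBlueprintFunctionLibrary
{
    GENERATED_BODY()

public:
    // BlueprintPure: shows up as a pure node (no exec pins), so it can be
    // reused anywhere in any graph instead of duplicating node clusters.
    UFUNCTION(BlueprintPure, Category = "Helpers")
    static float DampedValue(float Current, float Target, float DeltaTime, float Speed)
    {
        // Simple frame-rate-independent interpolation toward Target.
        return Current + (Target - Current) * FMath::Clamp(DeltaTime * Speed, 0.f, 1.f);
    }
};
```

A cluster of a dozen math nodes collapses into a single pure node this way, which adds up quickly in graphs with hundreds of nodes.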
I’ve got a world generator blueprint that needs several helper data-only actors that fetch and preprocess data from DataTables (I should have made them components but now can’t be bothered). Having it all in the same blueprint had made the whole thing pretty much unusable. After the split I barely notice it and my PC is quite a bit weaker than yours.
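For anyone who wants to see what such a helper looks like, here's a rough sketch of the "fetch and preprocess DataTable rows outside the main Blueprint" idea in C++. All the names here (FItemRow, UItemCatalog, Weight) are placeholders I invented, not anything from the engine beyond UDataTable and FTableRowBase themselves:

```cpp
// ItemCatalog.h - hypothetical data-only helper that looks rows up in a
// DataTable so the big Blueprint graph never has to. Only compiles inside
// an Unreal project.
#pragma once

#include "Engine/DataTable.h"
#include "ItemCatalog.generated.h"

USTRUCT(BlueprintType)
struct FItemRow : public FTableRowBase
{
    GENERATED_BODY()

    UPROPERTY(EditAnywhere, BlueprintReadOnly)
    float Weight = 0.f;
};

UCLASS(BlueprintType)
class UItemCatalog : public UObject  // a component or actor works just as well
{
    GENERATED_BODY()

public:
    UPROPERTY(EditAnywhere)
    UDataTable* ItemTable = nullptr;

    // One small call site in the graph instead of a table-lookup node web.
    UFUNCTION(BlueprintPure, Category = "Catalog")
    float GetWeight(FName RowName) const
    {
        if (ItemTable)
        {
            if (const FItemRow* Row =
                    ItemTable->FindRow<FItemRow>(RowName, TEXT("ItemCatalog")))
            {
                return Row->Weight;
            }
        }
        return 0.f;
    }
};
```

The point is less the specific code and more the split: the world generator Blueprint calls one function per lookup, and all the table plumbing lives somewhere the graph editor never has to draw.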