Why is Blueprint performance so much higher than my C++ script?

So I am just getting started on my Unreal journey and found a great tutorial for building a Blueprint that generates a grid using a procedural mesh and, in the editor, lets you update rows, columns, tile size, and line color/opacity. In the engine and at runtime, performance is very smooth even when creating extremely large grids (50,000 x 50,000).

As I want to be able to leverage both Blueprint and C++ where appropriate, I thought it would be a good exercise to try to replicate this as a C++ script so I could directly compare them. While I was able to recreate the functionality, it only stays performant up to grids of about 200x200. I’m sure there are some things I’m not taking into account that I need to manage better in C++, but after playing around with several different things I seem to have hit a wall. Wondering if someone more knowledgeable can point me in the right direction.

I’m attaching the C++ script and header and a screenshot of the core part of the blueprint. Please let me know if you need any additional info, and thanks in advance for any advice.

GridDesigner.cpp (4.5 KB)
GridDesigner.h (2.5 KB)

There is no code in those files

What is this for? There are many ways you can generate a grid in just a few lines. You can use a material to do so for general use.

Ok not sure what happened to the files on the first attempt, but they should be fixed now.

Ultimately this is purely for my own education, but the blueprint version of this grid allows you to lay down a grid of nearly any size with great performance. Then on top of that you build the selector for the grid for movement or object placement etc.

I am still very new to Unreal and trying to understand how Blueprints relate to C++ in terms of performance and use cases. I was able to mimic the Blueprint in functionality but not in performance, and I’m just curious why. Even though I’m using the same procedural mesh component and calculating vertices the same way, why is the Blueprint so much more performant?

For example, if I don’t use UPROPERTY() then objects can be garbage collected at any time, but if I do use it I don’t fully understand when the engine collects them, especially as it relates to the OnConstruction event. I’m not looking to solve a problem exactly so much as to understand the mechanics at play in this example, if that makes sense.

I’m interested. Not everything is well documented, and I’m surprised that C++ would be less performant, so there is definitely something going on.
I don’t see anything especially odd in the code itself, but why are you not marking the object pointers as UPROPERTY()? Could you try marking all properties of the UCLASS as UPROPERTY() and rerun the performance test?

Just a note: since you are forward declaring UProceduralMeshComponent in the header, you don’t have to include “ProceduralMeshComponent.h” in the header; put that include in the .cpp file instead :). You can forward declare all your classes like that, which saves compile time.
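Roughly like this, as a sketch (AGridDesigner and GridMesh are placeholder names here, adjust to your actual class and members):

// GridDesigner.h
#pragma once

#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "GridDesigner.generated.h"

class UProceduralMeshComponent; // forward declaration is enough in the header

UCLASS()
class AGridDesigner : public AActor
{
	GENERATED_BODY()

public:
	// UPROPERTY() keeps the component tracked by the garbage collector
	UPROPERTY()
	UProceduralMeshComponent* GridMesh;
};

// GridDesigner.cpp
#include "GridDesigner.h"
#include "ProceduralMeshComponent.h" // the full include only needs to live here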

What tutorial did you use?

I started with everything set up as a UPROPERTY() and still had the performance gap.

I use Rider, and if you take away the UPROPERTY() it indicates that the object can be garbage collected at any time, so I was thinking that might be better for an object updating so often, but ultimately it didn’t change anything.

Yeah, that’s odd. You could add some quick logging to see where it’s taking time:

FDateTime startTime = FDateTime::UtcNow();
// ... do your task
FTimespan difference = FDateTime::UtcNow() - startTime;
UE_LOG(LogTemp, Log, TEXT("Step 1 took %f seconds."),  difference.GetTotalSeconds());

you may need
#include "Misc/DateTime.h"
for FDateTime/FTimespan, though that’s usually already pulled in via CoreMinimal.h

That’s a good idea. I’ll wrap the various steps and take a look.

Well, correct me if I’m wrong, but I think UObject properties on a class should always be marked UPROPERTY() so the garbage collector knows about them at all and collects them at the right time (not while they are still needed by the UCLASS).

How I love not having the documentation nicely together on an official page :).
But here are some references:

Ok, so I set everything back to use UPROPERTY() like when I started, and performance stayed the same. I also added in the logging, and it looks like the code itself is running incredibly efficiently, but I did notice something in the log I didn’t see before (see image). I do not get this error when resizing or moving the Blueprint version, but I do get it when running my C++ script. Based on the timing, I am thinking that whatever it’s doing on the “LogNavigationDirtyArea” line is where all the delay is coming from, and the rest is actually running correctly. Anyone know what this means? (I’ve started googling but haven’t found much so far.)

These timings were from creating a 1000x1000 grid, and the script itself only took 0.002 seconds total, which seems fine, but it still “lagged” hard when I changed the settings, so I’m pretty sure it’s that other piece.

Unsure, I have not worked with procedural meshes.
Something tells me it should be set up not to affect the navigation system, or be given bounds manually; see if this is possible. If you know the grid is going to be Rows x Columns x TileSize, you can perhaps set the bounds from the start.

Assuming you don’t want navigation to be built around these procedural meshes, try:

SelectionProceduralMesh->SetCanEverAffectNavigation(false);

or change the setting in your derived BP and see if it helps.
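In C++ that could go right where the component is created, e.g. (a sketch, assuming your component is set up in the actor constructor and named as in the snippet above; AGridDesigner is a placeholder class name):

AGridDesigner::AGridDesigner()
{
	SelectionProceduralMesh = CreateDefaultSubobject<UProceduralMeshComponent>(TEXT("SelectionProceduralMesh"));

	// Stop every grid rebuild from dirtying the navmesh and triggering a rebuild
	SelectionProceduralMesh->SetCanEverAffectNavigation(false);
}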

Okay so I am completely at a loss at this point. I have cleaned up all warnings/errors from the logs and am only getting the timing data, but there is still a very noticeable difference between the C++ and the Blueprint, including if you make a BP from the C++ class.

If I create a grid that is 1000x20 and then only change the Y to 1000 so that it’s a 1000x1000 grid, the C++ class (or a Blueprint derived from the C++ class) takes 6-7 seconds before displaying the logs in the image below. The Blueprint I built directly, doing the exact same update, is effectively instantaneous.

I even went and compared the properties of my procedural mesh between the Blueprint and the C++ version, and everything there was the same as well.

If anyone knows why this is happening I would love to know, but as I am just learning I don’t think I am going to chase this any further down the rabbit hole for now.

image

A quick check: you could try Break All in Visual Studio while it’s hanging and see if you can catch it in the act of whatever is taking so long.

Otherwise, Unreal has some pretty robust profiling tools that could narrow this down, but that’s a whole rabbit hole you may not want to visit yet.
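If you do decide to poke at it, a couple of stock console commands are enough to start with (nothing project-specific here):

stat unit – shows frame/game/draw/GPU times so you can see which thread spikes when you resize the grid
stat game – breaks the game thread time down further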


So I don’t think I made any material changes to my header and script, but I am now getting the performance I was expecting, so I am going to mark this resolved even though the exact root cause is still undetermined. I’m going to attach my updated script and header in case anyone is interested. I plan to continue building out functionality for the grid designer, so I may end up figuring it out as I go.

GridDesigner.cpp (5.0 KB)
GridDesigner.h (2.6 KB)
