Native Integer Types behave differently in Blueprint

That is how Blueprint reacts to an unsupported type (the Blueprint system doesn’t know how to handle it); there are other integer types that don’t work either (such as int8). It’s a known “bug”.

Hello!

I’ve just observed that native integers of different types are treated differently in Blueprint, which doesn’t seem right at all.

Consider the following native declaration as part of a Blueprintable class:
protected:
	// Editable in the Defaults panel as expected
	UPROPERTY(EditDefaultsOnly, BlueprintReadWrite, Category=Int)
	int32 SignedInt;

	// Shows up in the Defaults panel but is greyed out
	UPROPERTY(EditDefaultsOnly, BlueprintReadWrite, Category=Int)
	uint32 UnsignedInt;

Within the editor’s defaults panel, the signed integer’s value is editable as expected. The second unsigned integer is not editable and is greyed out.

Both types are listed as valid in the documentation:

Oddly enough, using a uint32 as a boolean works perfectly fine…
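For reference, a minimal sketch of the bitfield pattern that behaves this way (the property name here is hypothetical); a single-bit uint32 bitfield shows up in the editor and in Blueprint as an ordinary bool checkbox:

protected:
	// Single-bit uint32 bitfield; the editor and Blueprint expose this as a bool
	UPROPERTY(EditDefaultsOnly, BlueprintReadWrite, Category=Int)
	uint32 bUseFastPath : 1;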

It is definitely a bug that Epic should work out.

Specifically, int8 and uint32 should really not be greyed out!

Hi ambershee,

Sorry for not responding to this sooner. It managed to slip past me.

The current variable types available to use in Blueprints are essentially a holdover from UE3 and Kismet. The original intent was to keep things as simple as possible for Kismet users (and later Blueprint users). The documentation page that you linked lists a number of core data types, and all of these are usable in C++. However, Blueprint variable types are a bit more restrictive (for example, you can use float in Blueprints, but you cannot use double). The documentation page you were looking at could probably use some clarification in that regard.
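To make the restriction concrete, here is a minimal sketch (the property names are hypothetical): the float property is exposed to Blueprints normally, while the double declaration is either rejected by UnrealHeaderTool or left uneditable, depending on engine version, because Blueprint has no double pin type.

protected:
	// Exposed and editable in Blueprints as expected
	UPROPERTY(EditDefaultsOnly, BlueprintReadWrite, Category=Numbers)
	float SupportedFloat;

	// Not usable from Blueprints; depending on the engine version this is
	// rejected at header-parse time or simply greyed out in the editor
	UPROPERTY(EditDefaultsOnly, BlueprintReadWrite, Category=Numbers)
	double UnsupportedDouble;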

There has been some discussion internally about allowing Blueprints to use the largest numerical data type available (for example, int64 and double), but this would involve some significant changes to the source code since the Blueprint system was not originally designed to use those types. It seems like most of our developers are leaning towards wanting to use the larger data types, but it does not sound like they want to support all types (for example, make double available instead of float, but not both).

The ultimate goal is that Blueprints should “just work,” and simplifying data types available for use in Blueprints is what is intended. This issue is actually more of a design restriction than a bug, and while we would like to eventually allow Blueprints to use the largest numerical types available, you most likely will never be able to use all of them.

Thanks,

Personally, I’m not so keen on the larger data types; a lot of my reservations come from a networking and memory-use perspective. I’d rather be efficient at the moment, and I don’t need the flexibility a 64-bit numerical type would offer.

For the time being, I’m doing the majority of my work in C++ for networking reasons. Blueprint seems to be a bit lacking in that regard; whilst it works and isn’t too bad, it is certainly not as flexible as handling data member replication manually in code.
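For illustration, manual property replication in C++ looks roughly like the sketch below (the class and property names are hypothetical); per-property replication conditions such as COND_OwnerOnly are the kind of control that is harder to reach from Blueprint:

// MyActor.h
#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "MyActor.generated.h"

UCLASS()
class AMyActor : public AActor
{
	GENERATED_BODY()

protected:
	// Replicated to clients according to the condition registered below
	UPROPERTY(Replicated)
	int32 Score;

public:
	virtual void GetLifetimeReplicatedProps(TArray<FLifetimeProperty>& OutLifetimeProps) const override;
};

// MyActor.cpp
#include "MyActor.h"
#include "Net/UnrealNetwork.h"

void AMyActor::GetLifetimeReplicatedProps(TArray<FLifetimeProperty>& OutLifetimeProps) const
{
	Super::GetLifetimeReplicatedProps(OutLifetimeProps);

	// Only send Score to the owning client
	DOREPLIFETIME_CONDITION(AMyActor, Score, COND_OwnerOnly);
}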

Either way, it’s one thing to be able to create these types in Blueprints, and another to be able to reference and use them.

I really wish 64-bit types were supported by Unreal Engine in Blueprint, because Blueprint has revolutionized the programming paradigm.