Is it possible to read pixel values from 32-bit EXR image?

I’m trying to read in pixel values from a 32-bit EXR using the following code:


.h:
UFUNCTION(BlueprintCallable)
static FColor TextureTest(UTexture2D * TextureInput, int32 PixelIndex);

.cpp:
FColor UBlueprintFunctionLibrary::TextureTest(UTexture2D * TextureInput, int32 PixelIndex)
{
	FColor PixelColor;
	PixelColor = FColor::FromHex("AAA");

	if (TextureInput)
	{
		// Grab the top mip level and lock its raw bulk data for reading,
		// treating it as an array of 8-bit FColor values
		FTexture2DMipMap * CurrentMipMap = &TextureInput->PlatformData->Mips[0];
		FByteBulkData * RawImageData = &CurrentMipMap->BulkData;
		FColor* FormatedImageData = static_cast<FColor*>(RawImageData->Lock(LOCK_READ_ONLY));

		PixelColor = FormatedImageData[PixelIndex];

		TextureInput->PlatformData->Mips[0].BulkData.Unlock();
	}
	return PixelColor;
}

It’s working in the sense that it’s returning a colour value, but the value is totally wrong. It seems like some kind of strange conversion to 8-bit, perhaps?

For example, the value at pixel 0 should be [1100, 976, 1088] but it’s giving me [125, 100, 170].

Is there some additional code I need to be adding to make sure I get correct (HDR) values here?

The compression settings on the texture are HDR (RGB, no sRGB).

Thanks!

So I realised one thing that would definitely make this fail: FColor is only 8-bit anyway, and I need to be using FLinearColor. Also, FByteBulkData is 8-bit, so I need to bring the BulkData in as FUntypedBulkData (I believe). So here is the updated .cpp code:


FLinearColor UBlueprintFunctionLibrary::TextureTest(UTexture2D * TextureInput, int32 PixelIndex)
{
	FLinearColor PixelColor;
	PixelColor = FLinearColor(1270, 1270, 1270, 1270);

	if (TextureInput)
	{
		// Same as before, but treat the locked bulk data as 32-bit FLinearColor
		FTexture2DMipMap * CurrentMipMap = &TextureInput->PlatformData->Mips[0];
		FUntypedBulkData * RawImageData = &CurrentMipMap->BulkData;
		FLinearColor* FormatedImageData = static_cast<FLinearColor*>(RawImageData->Lock(LOCK_READ_ONLY));

		PixelColor = FormatedImageData[PixelIndex];

		TextureInput->PlatformData->Mips[0].BulkData.Unlock();
	}
	return PixelColor;
}

However, I’m now getting really weird values out of this. The output for pixel 0 is now:

[19145850309830042271612928.0, 0.007836, 19145846851065528451072000.0]

Which doesn’t even make sense…

Any help?!

Does anyone know if this is possible? I feel like the correct pixel values must be lurking there in the bulk data, but how to get them out?!?


I am actually facing the same problem, any news?

For any future people running into the same issue:
I was able to get it working by using FFloat16Color:

FTexture2DMipMap& Mip = Texture->PlatformData->Mips[0];
// HDR (RGBA16F) textures store half-float pixels, so read the bulk data as FFloat16Color
const FFloat16Color* FormatedImageData = static_cast<const FFloat16Color*>(Mip.BulkData.LockReadOnly());
const FFloat16Color Color = FormatedImageData[(Y * Texture->GetSizeX() + X)];
Mip.BulkData.Unlock(); // don't forget to unlock when done
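Putting that together, here’s a minimal complete version in the same shape as the TextureTest function above. This is just a sketch: the class and function names plus the bounds checks are my own additions, and it assumes the texture settings listed below so that mip 0 really is a flat FFloat16Color array:

FLinearColor UMyBlueprintFunctionLibrary::ReadHDRPixel(UTexture2D* Texture, int32 X, int32 Y)
{
	FLinearColor PixelColor = FLinearColor::Black;

	if (Texture && Texture->PlatformData && Texture->PlatformData->Mips.Num() > 0)
	{
		FTexture2DMipMap& Mip = Texture->PlatformData->Mips[0];
		const FFloat16Color* FormatedImageData = static_cast<const FFloat16Color*>(Mip.BulkData.LockReadOnly());

		if (FormatedImageData && X >= 0 && X < Texture->GetSizeX() && Y >= 0 && Y < Texture->GetSizeY())
		{
			// Widen each 16-bit half-float channel to a full 32-bit float
			const FFloat16Color& Half = FormatedImageData[Y * Texture->GetSizeX() + X];
			PixelColor = FLinearColor(Half.R.GetFloat(), Half.G.GetFloat(), Half.B.GetFloat(), Half.A.GetFloat());
		}

		Mip.BulkData.Unlock();
	}
	return PixelColor;
}

With a matching UFUNCTION(BlueprintCallable) declaration in the .h, this can be called straight from Blueprint.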

Texture settings:

  • Mip Gen Settings: NoMipmaps
  • Never Stream: true
  • Compression Settings: HDR (RGBA16F, no sRGB)
  • Lossy Compression Amount: No lossy compression
  • sRGB: false
  • Mip Load Options: true
  • Virtual Texture Streaming: false
Note that probably not all of these settings are required; I didn’t test each one separately. The streaming-related settings are likely only necessary when doing this on a separate thread. If you’d rather apply the settings from code, there’s a rough sketch below.
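Something like this should work at editor time (a sketch only, based on the UTexture/UTexture2D property names as I understand them; ApplyHDRPixelReadSettings is a hypothetical helper, and I’ve skipped Mip Load Options here since that one is an enum rather than a bool):

// Editor-time sketch: apply the settings listed above from C++.
// MipGenSettings and LossyCompressionAmount are editor-only data,
// hence the WITH_EDITORONLY_DATA guard.
void ApplyHDRPixelReadSettings(UTexture2D* Texture)
{
#if WITH_EDITORONLY_DATA
	Texture->MipGenSettings = TMGS_NoMipmaps;
	Texture->LossyCompressionAmount = TLCA_None;
#endif
	Texture->CompressionSettings = TC_HDR;
	Texture->SRGB = false;
	Texture->NeverStream = true;
	Texture->VirtualTextureStreaming = false;

	// Rebuild the platform data / RHI resource so the settings take effect
	Texture->UpdateResource();
}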