UTexture2D C++ sample at x,y coordinates returning incorrect RGBA values

I’ve been experimenting with reading/writing pixel values from a UTexture2D recently, and while I’ve got my code spitting out values, they aren’t correct.

// Grab the top mip of the texture and lock its bulk data for reading.
FTexture2DMipMap* MipMap = &Texture->PlatformData->Mips[0];
FByteBulkData* ImageData = &MipMap->BulkData;
FColor* RawImageData = (FColor*)ImageData->Lock(LOCK_READ_WRITE);

// Width is the mip's SizeX; the pixel data is laid out row by row.
const int32 Width = MipMap->SizeX;
FColor PixelColor = RawImageData[(Y * Width) + X];

ImageData->Unlock();

return PixelColor;

Here is what the function outputs in UE5:
image
It should be RGBA(.75,0,0,0)

This is what was used to generate the texture:

I have also exported the texture and viewed the pixel value in PS, and it matches the original baking material.

Any ideas?

Try this by making a bp lib:

header

	UFUNCTION(BlueprintCallable, meta = (DisplayName = "Get Color at coords", Keywords = "ColorTools sample test testing"), Category = "ColorToolsTesting")
	static FColor ProcessTexture(UTexture2D* InTexture, int32 PixelX, int32 PixelY);

cpp


FColor UColorToolsBPLibrary::ProcessTexture(UTexture2D* InTexture, int32 PixelX, int32 PixelY)
{
	//GEngine->AddOnScreenDebugMessage(-1, 3, FColor::Orange, FString::FromInt(PixelX));

	// Lock the top mip's bulk data and treat it as an array of FColor (assumes uncompressed BGRA8 data).
	FTexture2DMipMap* MyMipMap = &InTexture->PlatformData->Mips[0];
	FByteBulkData* RawImageData = &MyMipMap->BulkData;
	FColor* FormattedImageData = static_cast<FColor*>(RawImageData->Lock(LOCK_READ_ONLY));

	uint32 TextureWidth = MyMipMap->SizeX, TextureHeight = MyMipMap->SizeY;
	FColor PixelColor = FColor::Black; // Returned unchanged if the coordinates are out of bounds.

	if (PixelX >= 0 && (uint32)PixelX < TextureWidth && PixelY >= 0 && (uint32)PixelY < TextureHeight)
	{
		PixelColor = FormattedImageData[PixelY * TextureWidth + PixelX];
	}

	RawImageData->Unlock();

	return PixelColor;
}
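
One thing worth guarding against: the FColor* cast above only makes sense when the mip data is uncompressed 8-bit BGRA. A sketch of a check you could add before the Lock call, assuming the texture really ends up as the usual PF_B8G8R8A8:

	// Sketch: bail out early if the mip data isn't laid out as one FColor per pixel.
	if (InTexture->GetPixelFormat() != PF_B8G8R8A8)
	{
		UE_LOG(LogTemp, Warning, TEXT("ProcessTexture: expected PF_B8G8R8A8 pixel data"));
		return FColor::Black;
	}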


Still returns an incorrect value:
image

Could the issue be with the texture format somehow? It’s got NoMipMaps, VectorDisplacementMap, & SRGB = false
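
(In C++ terms that setup would look roughly like the sketch below; MyTexture is just a placeholder for the asset reference, and MipGenSettings is editor-only data.)

// Sketch: mirroring the texture settings described above from C++.
MyTexture->CompressionSettings = TC_VectorDisplacementmap; // uncompressed 8-bit BGRA
MyTexture->SRGB = false;
#if WITH_EDITORONLY_DATA
MyTexture->MipGenSettings = TMGS_NoMipmaps;
#endif
MyTexture->UpdateResource();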

What are your texture settings?

Compression settings: UserInterface2D (RGBA), Texture Group: UI?
I have SRGB set to false and I get correct values.

Is your output color set as a Linear Color?

I’m not 100% sure how it treats Emissive Color. It might not register normally, as it’s an effect (it probably has its own buffer).

I’ve made a whole new texture in PS with red @ .75 again.
Compression: UserInterface2D(RGBA)
Texture Group: UI
SRGB = false

Still gives the exact same result as the old texture: .52
I’m unsure what you mean by “output color set as a Linear Color”. If you are talking about the lib function, it returns type FColor.

When sampling a pixel through a material and using debugscalarvalue, it uses Linear Color as its sampler type and does return the correct value, if that’s what you’re asking. But that’s in a shader, and the trouble is still with the C++.

How are you generating the texture?

I get the correct results


UTexture2D* UColorToolsBPLibrary::CreateTexture(UTextureRenderTarget2D* TextureRenderTarget) {

	// Creates Texture2D to store TextureRenderTarget content
	UTexture2D* Texture = UTexture2D::CreateTransient(TextureRenderTarget->SizeX, TextureRenderTarget->SizeY, PF_B8G8R8A8);
#if WITH_EDITORONLY_DATA
	Texture->MipGenSettings = TMGS_NoMipmaps;
#endif
	Texture->SRGB = TextureRenderTarget->SRGB;

	// Read the pixels from the RenderTarget and store them in a FColor array
	TArray<FColor> SurfData;
	FRenderTarget* RenderTarget = TextureRenderTarget->GameThread_GetRenderTargetResource();
	RenderTarget->ReadPixels(SurfData);

	// Lock the texture's top mip and copy the render target data into it
	void* TextureData = Texture->GetPlatformData()->Mips[0].BulkData.Lock(LOCK_READ_WRITE);
	const int32 TextureDataSize = SurfData.Num() * 4; // 4 bytes per BGRA8 pixel
	FMemory::Memcpy(TextureData, SurfData.GetData(), TextureDataSize);
	Texture->GetPlatformData()->Mips[0].BulkData.Unlock();

	// Apply Texture changes to GPU memory
	Texture->UpdateResource();
	return Texture;
}
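
(For what it’s worth, a minimal usage sketch, with MyRenderTarget standing in for whatever render target reference you already have:)

// Hypothetical usage: convert the render target, then sample a pixel from the result.
UTexture2D* Sampled = UColorToolsBPLibrary::CreateTexture(MyRenderTarget);
FColor Pixel = UColorToolsBPLibrary::ProcessTexture(Sampled, 16, 16);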

res2

I seem to have found the problem,

image

When casting from FColor to Linear Color it seems to screw something up and returns a red of .52, but when getting the byte value straight from the FColor on the node, it returns the original 191/255.

You need to convert to a Linear Color. The outputs are not RGBA floats but bytes.

Full color node => Linear Color
Notice the extra node in my example.
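
(For context, the C++ side has the same two paths, at least as I understand the engine’s color types: constructing an FLinearColor from an FColor runs the sRGB-to-linear conversion, while FColor::ReinterpretAsLinear() is just a divide by 255. A quick sketch:)

// Sketch of the two conversion paths:
FColor Pixel(191, 0, 0, 255);

// The FLinearColor(FColor) constructor applies sRGB-to-linear decoding.
FLinearColor ViaSRGB = FLinearColor(Pixel);            // R comes out around 0.52

// ReinterpretAsLinear() is a plain divide by 255, no sRGB involved.
FLinearColor ViaDivide = Pixel.ReinterpretAsLinear();  // R comes out around 0.749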

Not sure what’s happening here,
image

The byte still gives the correct value, 191/255, but when converting to Linear Color it’s still .52


This is my byte and RGB display for your material

Try using my code to convert the render target to a texture. Perhaps your conversion process has errors?

If it’s reading .75, shouldn’t the byte value be 191?

191/255 = 74.9%

225/255 = 88.2%

It doesn’t look to me like yours is returning correctly either.

Am I missing something about the difference between Color & Linear Color? The entire spectrum appears to be shifted down a bit, like it’s being run through a curve that sinks the values.

Edit: The casting node appears to apply sRGB but doesn’t tell you. The inverse cast (LC → C) has a toggleable boolean for whether or not it applies.
Curious to see your thoughts on this.
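
(If it helps, the numbers line up with the standard sRGB transfer function. A rough sketch, using the usual IEC 61966-2-1 constants rather than anything copied from the engine, and ignoring the small linear segment near zero:)

#include <cmath>

// Approximate sRGB transfer functions (gamma segment only, for brevity).
float SRGBToLinear(float C) { return std::pow((C + 0.055f) / 1.055f, 2.4f); }
float LinearToSRGB(float L) { return 1.055f * std::pow(L, 1.0f / 2.4f) - 0.055f; }

// Byte 191 read back two ways:
//   plain divide:  191 / 255           = 0.749 (the value you expect)
//   sRGB decode:   SRGBToLinear(0.749) = ~0.52 (what the Color -> Linear Color node returns)
// And a linear 0.75 written out through sRGB encoding:
//   LinearToSRGB(0.75) = ~0.88, and 0.88 * 255 = ~225 (the byte the render target sample shows)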

Color → Linear Color
image

Linear Color → Color
image

Bytes aren’t even in the decimal system. You’d need to calculate them, and fractions in binary are a whole can of worms that I haven’t calculated since college. :stuck_out_tongue:
It’s too late in the night for that type of torture.
With my example you get the same values as in your material.

In my conversion code the SRGB flag is copied over, but I set it to true for good measure.

I believe the bytes return node there is the integer representation of the uint8, which means you can divide it by the maximum of a uint8 (256 - 1 = 255) to get the float representation.

I made a little tester to look between the values:
(First image was labeled backwards, sorry for the confusion)
image

I’m getting only a .01 margin of error no matter the color combination. (The trailing part of the fraction should probably be rounded up.)

image
From one of your previous posts, the bytes being turned into floats are being affected by sRGB.

If it was a pure sample, the byte would read 191, not 225

In the render texture I’m getting these values, sampled from a direct screenshot from the material browser and cropped.

225,0,0

Also take into account that Unreal’s default color workspace IS in sRGB. So any material it generates is in that color space, unless you choose otherwise in the editor preferences.

As bytes, yes, but use the node to convert them to floats (if you are going from FColor → FLinearColor) and it will apply sRGB without your consent and mess up the final product.

See?
image

The cast converts 225 into a float incorrectly, and spits out ~.75, but that isn’t a pure cast.
225/255 = .88

But when you cast back, it gives you the option to apply sRGB
image

Here is the cast with/without sRGB (Our diverging values)
image
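
(The C++ side mirrors that toggle, as far as I can tell: FLinearColor::ToFColor() takes a bool for whether to apply the sRGB encoding. A sketch:)

// Sketch: the inverse conversion also exposes the sRGB switch.
FLinearColor Linear(0.75f, 0.0f, 0.0f, 1.0f);

FColor WithSRGB    = Linear.ToFColor(true);  // R comes out around 225 (sRGB encoded)
FColor WithoutSRGB = Linear.ToFColor(false); // R comes out around 191 (plain * 255)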

Yet you are converting an Unreal material that is not in the linear color space by default. You would need to convert its colors before sampling it.

The material is irrelevant; I’ve also done this with imported textures,
image
as shown above.

And alongside the C++ sampling, I made a material pixel sampler here,


which also samples the correct (non-sRGB) float value.