How to read RGBA from any texture?

The “UpdateTexture” was gnawing away in the back of my mind - your InTexture is a UTextureRenderTarget2D rather than a UTexture2D:

	TArray<FColor> data;
	FReadSurfaceDataFlags readFlags(RCM_UNorm); // RCM_UNorm maps to [0, 1]; RCM_SNorm would map to [-1, 1]
	FTextureRenderTarget2DResource* rtrgtResource = (FTextureRenderTarget2DResource*)rtrgt->GetResource();
	rtrgtResource->ReadPixels(data, readFlags);
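
For reference, a minimal self-contained version of that render-target path (a sketch, not the original code - it assumes the usual UTextureRenderTarget2D / FTextureRenderTargetResource API and a game-thread call site):

	bool ReadRenderTargetPixels(UTextureRenderTarget2D* RenderTarget, TArray<FColor>& OutPixels)
	{
		if (!RenderTarget)
		{
			return false;
		}
		// GameThread_GetRenderTargetResource() is the safe way to grab the resource from the game thread.
		FTextureRenderTargetResource* Resource = RenderTarget->GameThread_GetRenderTargetResource();
		if (!Resource)
		{
			return false;
		}
		FReadSurfaceDataFlags ReadFlags(RCM_UNorm); // normalized [0, 1] output
		return Resource->ReadPixels(OutPixels, ReadFlags);
	}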

No, I’m not using a render target anymore. I took that out when I made the changes. It’s a plain UTexture2D.

The UpdateTexture() was there from when I was trying to copy the texture to a transient uncompressed texture, but I was never able to get that to work.

Can you share the whole section of code including the creation and any manipulation of InTexture?

There is zero manipulation of InTexture. They are assets that are put into a list; I iterate over the list, reading each one and copying it into another texture.

This all works fine if I render to a texture and read from the render texture using ReadPixels().

I’ve already posted the read code above. The rest is in blueprints and it works fine because when I swap out my read function with rendering to texture and ReadPixels(), it works fine.

I think I see - your code assumes that the data is always stored as FColor, but that’s not the case - other formats will be linear color, floats, etc.
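
For example, a quick check of what the source actually stores before reinterpreting the locked mip (just a sketch, assuming the FTextureSource::GetFormat() accessor):

ETextureSourceFormat Format = InTexture->Source.GetFormat();
switch (Format)
{
case TSF_G8:      /* 1 byte per pixel, grayscale */ break;
case TSF_BGRA8:   /* 4 bytes per pixel, same layout as FColor */ break;
case TSF_RGBA16:  /* 8 bytes per pixel, 16-bit channels */ break;
case TSF_RGBA16F: /* 8 bytes per pixel, half-float channels */ break;
default:          /* anything else needs its own conversion */ break;
}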

Here’s my blueprint

ReadTexture is the code I have above with LockMipReadOnly(0);
I’ve used LockMip(0) as well. Same result.

If I remove the Read Texture node and render to texture and do ReadPixels, that works. It uses the exact same input and output textures.

All my textures are color, not linear color and not floats. Normals use a different compression, but strangely those work in both methods.

I don’t know - your output image here looks like it’s addressing something that’s 4 times smaller (larger???) which would be the difference between an FColor and an FLinearColor - or something along those lines.

FColor is smaller (4 bytes per pixel vs. 16 for an FLinearColor), so an FLinearColor buffer should be way bigger - this is doing the opposite. But I think you’re on to something here. I think the artifacts are happening on textures where all three RGB channels are the same. I already checked the source files way back; they’re all 24-bit PNGs. So UE must automatically convert them to grayscale.

That would mean these are one-byte-per-pixel textures. I’d need to check whether they are grayscale. Let me see if I can detect that.
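
A quick way to check that (a sketch, assuming FTextureSource::GetBytesPerPixel() is available in this engine version):

// If this logs 1, the asset was imported as single-channel grayscale (TSF_G8);
// 4 would mean BGRA8, i.e. FColor-sized pixels.
const int32 BytesPerPixel = InTexture->Source.GetBytesPerPixel();
UE_LOG(LogTemp, Log, TEXT("%s: %d bytes per source pixel"), *InTexture->GetName(), BytesPerPixel);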

AWWW YISSS!!!

Thanks for your help! I am so happy right now 🙂

This is my roughness texture.

const int SizeX = InTexture->GetSizeX();
const int SizeY = InTexture->GetSizeY();

const int sz = SizeX * SizeY;
OutData.SetNum(sz);

ETextureSourceFormat pf = InTexture->Source.GetFormat();
if (pf == TSF_G8)
{
  // Grayscale source: one byte per pixel, so replicate it into the R, G and B bytes.
  const uint8_t* RawImageData = reinterpret_cast<const uint8_t*>(InTexture->Source.LockMipReadOnly(0));
  for (int i = 0; i < sz; i++)
  {
    OutData[i] = RawImageData[i] + (RawImageData[i] << 8) + (RawImageData[i] << 16);
  }
}
else
{
  // BGRA8 source: four bytes per pixel, same layout as FColor.
  const FColor* RawImageData = reinterpret_cast<const FColor*>(InTexture->Source.LockMipReadOnly(0));
  for (int i = 0; i < sz; i++)
  {
    OutData[i] = RawImageData[i].R + (RawImageData[i].G << 8) + (RawImageData[i].B << 16);
  }
}

InTexture->Source.UnlockMip(0);

edit: Added missing UnlockMip(0) call at the end.
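
For anyone finding this later, here is roughly how the whole thing could sit in one function (a sketch only - the function name, the TArray<uint32> output type, the explicit TSF_BGRA8 check, and the alpha handling are my assumptions, not part of the code above):

#include "Engine/Texture2D.h"

// Hypothetical wrapper around the snippet above; packs each pixel as 0xAABBGGRR in a uint32.
bool ReadTextureSourcePixels(UTexture2D* InTexture, TArray<uint32>& OutData)
{
  if (!InTexture || !InTexture->Source.IsValid())
  {
    return false;
  }

  const int32 SizeX = InTexture->GetSizeX();
  const int32 SizeY = InTexture->GetSizeY();
  const int32 Num = SizeX * SizeY;
  OutData.SetNum(Num);

  const ETextureSourceFormat Format = InTexture->Source.GetFormat();
  const uint8* Raw = InTexture->Source.LockMipReadOnly(0);
  if (!Raw)
  {
    return false;
  }

  if (Format == TSF_G8)
  {
    // One byte per pixel: replicate the value into R, G and B, force alpha opaque.
    for (int32 i = 0; i < Num; i++)
    {
      const uint32 G = Raw[i];
      OutData[i] = G | (G << 8) | (G << 16) | 0xFF000000u;
    }
  }
  else if (Format == TSF_BGRA8)
  {
    // Four bytes per pixel, laid out like FColor.
    const FColor* Pixels = reinterpret_cast<const FColor*>(Raw);
    for (int32 i = 0; i < Num; i++)
    {
      OutData[i] = Pixels[i].R | (uint32(Pixels[i].G) << 8) | (uint32(Pixels[i].B) << 16) | (uint32(Pixels[i].A) << 24);
    }
  }
  else
  {
    // Any other source format (RGBA16, float formats, ...) would need its own conversion.
    InTexture->Source.UnlockMip(0);
    return false;
  }

  InTexture->Source.UnlockMip(0);
  return true;
}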
