How to read RGBA from any texture?

Is there a way to read RGBA values from any Texture 2D? It seems this is something often requested, but the various methods posted so far just don’t work.

This returns null except with certain very specific texture settings, and even when it works, the data is too small:
RawImageData->Lock(LOCK_READ_ONLY)

RHILockTexture2D() doesn’t work. The data is always too small even on uncompressed textures.

What is the proper way to read RGBA values from a texture directly? I don’t care about speed and it can be editor only. I just want to write an editor tool that merges a bunch of tiles together into a single texture. Ironically, writing seems to be no problem.

There’s a blueprint node, read texture 2D, or something like that…

I can’t find any such node.

I know there’s another node, but I can’t find it right now. In blueprint, you have to go the long way around.

Yeah, I’m already rendering into a texture and doing ReadPixels. Problem I’m having now is that the output is blurry and I have no idea why. It shows up crisp when I first double click on the output texture. But as soon as I save, it gets blurry.

Have you got nomips set?

Makes no difference. I do use mips here though. The mip levels are way blurrier than they should be.

And it’s the resolution you’re expecting?

Yeah. It’s really weird. Sometimes when I load the texture up in the editor after it’s generated, it looks fine. But when I hit the save icon, it goes blurry.

Here’s one run where one tile is the proper resolution, but the others are not.

But when I save it, it changes to this:

The exact same code generates all the tiles. The material to render the texture uses LOD0. Oh wait… these can be streamed in, right? That’s likely the problem. Let me force them to load all LODs. Thing is I have like 50 textures. I don’t want to change them one by one.
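If it is the streaming, maybe I can force the full mip chain in from code before the render instead of editing each asset. Something like this (just a sketch, assuming SetForceMipLevelsToBeResident and WaitForStreaming do what I think they do; ForceFullResolution is only an illustrative name):

#include "Engine/Texture2D.h"

// Sketch: ask the streamer to keep every mip of a tile resident, then block
// until streaming has caught up, so the render samples full-resolution data.
void ForceFullResolution(UTexture2D* Tile)
{
	if (!Tile)
	{
		return;
	}

	Tile->SetForceMipLevelsToBeResident(30.0f); // keep all mips resident for ~30 seconds
	Tile->WaitForStreaming();                   // block until they are actually loaded
}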

You might find it easier with a tool?

These are both good

Yeah, it was the streaming functionality that was making it blurry. There’s an option in the settings to disable it. So I have to remember to disable streaming when generating these textures.
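It could also be done from code so I don’t have to remember it (just a sketch; whether the settings checkbox maps exactly to this cvar is a guess on my part, and Tile is just some UTexture2D*):

#include "HAL/IConsoleManager.h"

// Sketch: turn texture streaming off globally while the tiles are generated...
if (IConsoleVariable* CVar = IConsoleManager::Get().FindConsoleVariable(TEXT("r.TextureStreaming")))
{
	CVar->Set(0);
}

// ...or mark an individual texture as never streaming.
Tile->NeverStream = true;
Tile->UpdateResource();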


The entire reason for having my own code to create the tiles is that I have to see if I can write the mips myself, including the highest resolution. If you just use tiles like that, you’ll get color bleeding from adjacent tiles, so I need to add a border. It’s similar to what virtual textures do, except the border is used for the exact opposite reason. In SVT, they copy neighbouring tiles over into the border. I don’t want that. I want to avoid going into the adjacent tiles, so the border is just the outside pixels of each tile duplicated.
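Roughly what I mean by the border, as a CPU-side sketch (the helper and its names are made up): the gutter is filled by clamping back into the tile’s own pixels instead of sampling the neighbours.

#include "CoreMinimal.h"

// Sketch: copy one SrcSize x SrcSize tile into a destination atlas with a
// Border-pixel gutter, filling the gutter by clamping to the tile's own edge
// pixels rather than reading from adjacent tiles.
void CopyTileWithBorder(const TArray<FColor>& Src, int32 SrcSize,
                        TArray<FColor>& Dst, int32 DstWidth,
                        int32 DstX, int32 DstY, int32 Border)
{
	const int32 OutSize = SrcSize + 2 * Border;
	for (int32 y = 0; y < OutSize; ++y)
	{
		for (int32 x = 0; x < OutSize; ++x)
		{
			// Clamp into the source tile so the gutter repeats the edge pixels.
			const int32 sx = FMath::Clamp(x - Border, 0, SrcSize - 1);
			const int32 sy = FMath::Clamp(y - Border, 0, SrcSize - 1);
			Dst[(DstY + y) * DstWidth + (DstX + x)] = Src[sy * SrcSize + sx];
		}
	}
}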


I guess rendering to a render texture and doing read pixels works for now. Wish there was a read pixels for regular textures.
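For anyone else doing the same, the render-target route looks roughly like this (just a sketch; I’m assuming the material gets drawn in first with something like UKismetRenderingLibrary::DrawMaterialToRenderTarget):

#include "Engine/TextureRenderTarget2D.h"

// Sketch: blocking read-back of a render target's pixels on the game thread.
// Fine for an editor-only tool where speed doesn't matter.
bool ReadRenderTargetColors(UTextureRenderTarget2D* RenderTarget, TArray<FColor>& OutColors)
{
	if (!RenderTarget)
	{
		return false;
	}

	FTextureRenderTargetResource* Resource = RenderTarget->GameThread_GetRenderTargetResource();
	return Resource && Resource->ReadPixels(OutColors);
}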

Hi AlienRenders,

I’m the author of rdTexTools - the way I read the pixels is:

uint8* data=(uint8*)myTexture->Source.LockMip(0);

That’s been working in UE4.25 and up.
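Fleshed out a little, the whole pattern is roughly this (just a sketch; the unlock and the format check are worth adding, and it assumes the source data is 8-bit BGRA):

// Sketch: read the editor-only source data as FColor pixels.
// Only valid when the source format is TSF_BGRA8, which matches FColor's byte layout.
TArray<FColor> pixels;
if (myTexture->Source.GetFormat() == TSF_BGRA8)
{
	const int32 numPixels = (int32)(myTexture->Source.GetSizeX() * myTexture->Source.GetSizeY());
	if (const FColor* data = reinterpret_cast<const FColor*>(myTexture->Source.LockMip(0)))
	{
		pixels.Append(data, numPixels);
	}
	myTexture->Source.UnlockMip(0);
}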

That doesn’t work for me. It’s null.

edit: Well, it works on some textures. Like RGBA uncompressed. But not much else.

  const int SizeX = InTexture->GetSizeX();
  const int SizeY = InTexture->GetSizeY();

  const int sz = SizeX * SizeY;
  OutData.SetNum(sz);

  // Lock the top platform mip and treat it as BGRA8 (FColor) pixels.
  FTexture2DMipMap& MipMap = InTexture->GetPlatformData()->Mips[0];
  FByteBulkData* RawImageData = &MipMap.BulkData;
  FColor* FormattedImageData = static_cast<FColor*>(RawImageData->Lock(LOCK_READ_ONLY));

  // Pack R, G and B into a single 32-bit value per pixel.
  for (size_t i = 0; i < sz; i++)
  {
    OutData[i] = FormattedImageData[i].R + (FormattedImageData[i].G << 8) + (FormattedImageData[i].B << 16);
  }
  RawImageData->Unlock();

This is what I was using, but the pointer is almost always null. I only use it to update my lookup texture, but that’s uncompressed.
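One guard I should probably add in front of the loop above (just a sketch using the same variables): check whether the bulk data is even loaded and big enough, so the failure is obvious before the cast blows up.

// Sketch: bail out (with a log) if mip 0's bulk data isn't loaded, or is smaller
// than a full SizeX * SizeY buffer of FColor pixels.
const int64 ExpectedBytes = (int64)SizeX * SizeY * sizeof(FColor);
if (!RawImageData->IsBulkDataLoaded() || RawImageData->GetBulkDataSize() < ExpectedBytes)
{
	UE_LOG(LogTemp, Warning, TEXT("Mip 0 bulk data: %lld bytes, expected %lld"),
		RawImageData->GetBulkDataSize(), ExpectedBytes);
	return;
}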

And here’s another function I have that doesn’t work except for uncompressed.

const int SizeX = InTexture->GetSizeX();
const int SizeY = InTexture->GetSizeY();

const int sz = SizeX * SizeY;
OutData.SetNum(sz);

// Lock the editor-only source mip read-only and treat it as BGRA8 (FColor) pixels.
const FColor* RawImageData = reinterpret_cast<const FColor*>(InTexture->Source.LockMipReadOnly(0));
for (size_t i = 0; i < sz; i++)
{
  OutData[i] = RawImageData[i].R + (RawImageData[i].G << 8) + (RawImageData[i].B << 16);
}
InTexture->Source.UnlockMip(0);

Can you confirm that “InTexture” is valid? Like I said - the Source.LockMip(0) has worked 100% of the time in all test and live scenarios for quite some time.

Yeah, InTexture is valid. It’s the same tile textures I’m using above. They’re assets. With dynamic textures, I have to call

  InTexture->UpdateResource();
  FTextureCompilingManager& CompilerManager = FTextureCompilingManager::Get();
  CompilerManager.FinishAllCompilation();

But it still doesn’t work unless it’s uncompressed.
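One thing I should probably check (purely a guess on my part): the cast to FColor only makes sense when the source format is 8-bit BGRA, so something like this would at least tell me whether those textures are failing because their source data is in a different format:

// Sketch: verify the source format before treating the locked mip as FColor.
const ETextureSourceFormat Format = InTexture->Source.GetFormat();
const int64 BytesPerPixel = InTexture->Source.GetBytesPerPixel();

if (Format != TSF_BGRA8 || BytesPerPixel != sizeof(FColor))
{
	// G8, G16, RGBA16, RGBA16F etc. need their own conversion path.
	UE_LOG(LogTemp, Warning, TEXT("Source format %d, %lld bytes per pixel"), (int32)Format, BytesPerPixel);
	return;
}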

If it’s out there, I’ve tried it. I also tried RHILockTexture2D(), but that has the problem that it gives you the compressed buffer directly, so I can’t use it unless I find a decompressor, and that seems like a lot of work.

Hmmm… I wonder if the streaming functionality was interfering with this too. Let me try it again with streaming disabled.

edit: Nope. Pointer is null.

Just a note about the compilation - for 5.1 and above, you’ll need to submit any “on the fly” shaders too:

#if ENGINE_MAJOR_VERSION>4 && ENGINE_MINOR_VERSION>0
	FWorldContext* worldctx=GEngine->GetWorldContextFromGameViewport(GEngine->GameViewport);
	if(worldctx) {
		UWorld* world=worldctx->World();
		if(world) {
			UMaterialInterface::SubmitRemainingJobsForWorld(world);
		}
	}
	FAssetCompilingManager::Get().FinishAllCompilation();
#else
	GShaderCompilingManager->FinishAllCompilation();
#endif

I don’t know if that will fix this though…


I added that code and it compiled about 200 shaders. The pointers weren’t null anymore, but the memory from the textures was still too small. Some texture buffers looked like they had repeated smaller tiles in them.

Fairly certain if I ran this in debug mode, the debugger would catch memory overrun exceptions.

edit: That’s my bump map. Supposed to be completely black or white.