Packing data values - mushing specular/metal into a single channel of an RVT

I appreciate that anything involving this kind of thing results in a loss of fidelity, but I wonder if someone out there knows how to do this (if it's possible at all).

In my RVT, I'm playing around with packing different values into channels (we all do). However, I always seem to be ONE channel short of what I really want, knowhatImean??

I'm already using the mask channel for something I cannot change, so I had to ditch metal as it's typically the least used. But I still like to play around with it; tiny amounts of metal can make a lot of different things really pop.

SO! I tried playing around with packing values into a single channel, to be carried over the specular channel of an RVT. I just can't seem to get it quite right, so I wanted to reach out and see if there might be a community solution for something like this?

For my part, I tried a few things. I DON'T get the HalfIntToFloat/FloatToHalfInt nodes, although I suspect they are what I want, or at least are named for what I want to do: pack a couple of things into a float to carry over the RVT, etc…

I did try taking the specular and keeping 2 digits of decimal information (e.g. 0.00 → 0.99), and the same with metal. For specular, I multiplied it by 100 to 'shift' the decimal portion to the left (0.99 becomes 99.000), used Truncate to clip off any remaining decimal (just in case), and then multiplied it by 0.01 to 'shift' it back to the right. Similar with metal, except I just grabbed the already-fractional value with Frac and then shifted it further to the right. Then I added them together to create a value like 0.9999 for 0.99 specular and 0.99 metal. I could decode it on the other side and it seemed to sort of work, but I suspect I am not thinking it through properly…
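Roughly what I was attempting, written out as the HLSL you might drop into a Custom node (the function names are just mine, and this assumes the packed value survives the texture untouched):

```
// Pack two values, each kept to two decimal digits, into one 0-1 float.
// e.g. spec 0.99 and metal 0.99 become 0.9999.
float PackSpecMetal(float spec, float metal)
{
    float s = min(floor(saturate(spec)  * 100.0), 99.0);  // 0..99
    float m = min(floor(saturate(metal) * 100.0), 99.0);  // 0..99
    return (s * 100.0 + m) / 10000.0;                     // 0.0000..0.9999
}

// Unpack on the read side.
void UnpackSpecMetal(float packed, out float spec, out float metal)
{
    float i = round(packed * 10000.0);   // back to the 0..9999 integer
    spec  = floor(i / 100.0) / 100.0;
    metal = fmod(i, 100.0) / 100.0;
}
```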

I wonder if this could be better done via a custom node, like Int2HalfFloat?

Thoughts/Direction? Thanks in advance to anyone who can assist; appreciated.


I did some experimenting with this method, both packing 3 values and using binary encoding.

In principle it worked fine when only dealing with it as a float value, but once you write it to a texture I came to two conclusions:

  1. Texture filtering will mangle your values, so you essentially need nearest-neighbor sampling to keep it from garbling your encoding, which results in a pixelated look.

  2. Compression will also scramble your values, so no luck there.

Some other observations.

Encoding the data as binary was more resistant to noise, and I was able to partially reconstruct the binary-encoded data in cases where the decimal shifting alone got completely scrambled by floating-point error.
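For anyone curious, the binary encoding was along these lines (just a sketch here, not my exact code, and the names are mine): quantize each value down to a few bits, pack the bits into one integer, and normalize that to 0-1.

```
// Pack two 4-bit values into one 0-1 float (8 bits total).
float PackBits(float a, float b)
{
    uint ai = (uint)round(saturate(a) * 15.0);   // 4 bits each
    uint bi = (uint)round(saturate(b) * 15.0);
    uint packed = (ai << 4) | bi;                // 0..255
    return packed / 255.0;
}

// Recover both values on the read side.
void UnpackBits(float packedF, out float a, out float b)
{
    uint packed = (uint)round(packedF * 255.0);
    a = ((packed >> 4) & 0xFu) / 15.0;
    b = (packed & 0xFu) / 15.0;
}
```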

So, in short, I don't think this method is practical with textures, due to texture filtering and compression. If you were able to disable texture filtering on one channel, maybe you could interpolate it through the vertex shader to smooth it out, but I doubt you can do that for just one channel.

One hack might be to encode a binary switch by comparing the values in two different channels?


Agreed, it seemed like the compression was just ‘eating’ data. Even though I would have accepted a fairly low level of fidelity vs the 65k values pack-float delivers, I wasn’t getting stuff out the other end that matched what I put in.

Thanks for the sanity-check. Best of luck out there.

If it helps, I used this method to encode extra float values into the primitive data system, which I think caps out at 36 float values in total. So your idea is perfectly functional for working within the material or blueprint itself; it's just that the texture end complicates things. I've done a lot of weird things like this, and if you want to chat more you can probably message me on Discord. I should be lurking in Unreal Slackers if you want to see the code I did for the binary encoding or the three-float packing.

Good2know.

I'd need this to capture 2 floats in one channel of the RVT, so they could be read back out into the heightmesh. I capture BaseColor, Roughness/Specular, and Normal, but need to use the mask channel for a mask. Ideally, if I could get one more float into the virtual texture for metal, I'd be golden. It'd be something I could reference for blending into the landscape, have the full complement of PBR information, and still have my alpha layer for my effects, et al.

Primitive data would be great in and of itself, but I need to have object X (the landscape) write its output to the RVT for sampling in object Y, which, as I currently understand it, is not possible via materials? Primitive data references the object itself, no (genuinely asking)?

I did a quick test. If you just need a binary mask for metallic, I tried something that almost worked, but it leaves a bit of a boundary mark…

Take your greyscale mask (which will be between 0 and 1; as a safety factor you could multiply it by 0.99 or whatever to knock it below 1)
and your metallic binary (which will be 0 OR 1), and add them together.
The result will either be unchanged or a value between 1 and 2.

But we can't save anything beyond 1 to a texture, so divide it by 2 to bring the whole thing back down to the 0-1 range.

Then this gets rendered to the render target

To decode, we just multiply by 2 to expand back to 0-2, then mask whether it's above 1, then subtract that mask from the expanded value…

The result is the greyscale texture, and the above-1 part is our binary…
This looks like it works, but it has a thin 1-pixel outline; turning off texture filtering didn't seem to help, and adding compression to the mix made it look pretty rough. Ah well, it was worth a shot!
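In code, the test boils down to something like this (names are mine, and it still suffers from the filtering/compression issues above):

```
// Encode: greyscale stays below 1, the metallic bit pushes the value into 1-2,
// then /2 fits the whole thing back into the 0-1 range a texture can hold.
float EncodeGreyPlusBit(float grey, float metalBit)
{
    float g = saturate(grey) * 0.99;                 // safety: keep it strictly below 1
    return (g + (metalBit > 0.5 ? 1.0 : 0.0)) * 0.5;
}

// Decode: expand back to 0-2; anything at or above 1 means the bit was set.
void DecodeGreyPlusBit(float packed, out float grey, out float metalBit)
{
    float expanded = packed * 2.0;
    metalBit = expanded >= 1.0 ? 1.0 : 0.0;
    grey = (expanded - metalBit) / 0.99;             // remove the bit and undo the safety scale
}
```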

edit: the primitive data thing was just kind of a tangent. I was just saying that if you ever need to pass more data via pure float handling, then some of this trickery works for that; it just seems texture compression and filtering mess up my schemes.

To be honest, 1-bit encoding may still be possible, but the binary encoding was kind of expensive. I can show you my code if you DM me on Discord.

I used to lerp between multiple values like 0, 0.25, 0.5, 0.75, and 1 to pack multiple alphas. It worked rather well until you started seeing bleeding when it mipped out at a distance. Variations on this produced the same kinds of borders and the like. It doesn't seem like this is going to work in practice.
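For context, that idea boils down to something like this (a sketch with my own naming): store one of a handful of fixed levels and snap back to the nearest level on read; filtering and mips blend between the levels, which is where the bleeding comes from.

```
// Encode a layer index (0..levelCount-1) as evenly spaced levels,
// e.g. 0, 0.25, 0.5, 0.75, 1 for levelCount = 5.
float EncodeLayer(float index, float levelCount)
{
    return index / (levelCount - 1.0);
}

// Decode by snapping back to the nearest level.
float DecodeLayer(float encoded, float levelCount)
{
    return round(encoded * (levelCount - 1.0));
}
```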

For my purposes I’d want the full float. Thanks for the input.

How about using the normal map Z (B channel) instead, then using DeriveNormalZ afterwards on the R and G channels of the normal map pin coming out of your RVT?
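i.e. stash the extra value in B and rebuild Z from R and G on the read side, which is roughly what DeriveNormalZ does (sketch, my naming):

```
// Reconstruct a unit normal's Z from its X and Y,
// freeing the B channel to carry something else.
float3 ReconstructNormal(float2 xy)
{
    float z = sqrt(saturate(1.0 - dot(xy, xy)));
    return float3(xy, z);
}
```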

You can do that but it seemingly happens already under the hood?

Any time I tried it, you do get the Z axis back, but the results are not seamless; it's very speckly and shimmery due to mipping and the finite granularity of the RVT. It doesn't work.

Anything I stuck into the Z channel was garbled on the other end, so I think Unreal owns that particular channel completely.

I’ve not tested this in 5.3 so try it, maybe it works?

I moved on to using alphas in the RVT and painting textures on the heightmesh directly. At least that way it's at whatever texel resolution I want and doesn't depend on the finite granularity of the RVT.

I noticed this as well when I tried it. It's ironic, because if you read their documentation on the RVT, it says the normal doesn't get any transformations (they say this from like 4.2 to present): Runtime Virtual Texturing in Unreal Engine | Unreal Engine 5.3 Documentation

Using the RVT for just alpha masks, did you actually gain anything from having it? That doesn't seem like a very calculation-heavy thing to front-load at runtime.

So the two approaches I see are to push either PBR or non-PBR information. The former is limited by the number of things you can ultimately push: no dedicated metal, AO, etc. You would still want to track which 'layers' (rock/dirt/etc.) are where, so you can append that info on the post-read RVT side. There is a Mask channel for this, which can help, but since you are already dipping into tracking alphas, I ditched the PBR altogether and was able to use the remaining channels to track a decent number of things: 5 channels (RGB, Spec, Rough), 6 if you use the mask. Using the Normal channel for the world-space vertex normal of the landscape, I can use that on the other side in a variety of ways (particularly triplanar mapping).
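For example, the world-space normal coming back from the RVT is enough to build the triplanar blend weights on the read side (a rough sketch, my naming; the three planar samples themselves are omitted):

```
// Triplanar blend weights from a world-space normal.
float3 TriplanarWeights(float3 normalWS, float sharpness)
{
    float3 w = pow(abs(normalWS), sharpness);
    return w / (w.x + w.y + w.z);   // normalise so the three projections sum to 1
}
```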

As well, since you are putting the texture samples for the draw on the read side, you can scale them to whatever texel density you want/need, and aren't limited by the fidelity of the RVT data object.

The trade-off is that to blend meshes with the landscape, you either have to use a dithered solution or redo the maths that draw the landscape, but in the mesh material; there is no PBR to sample from the RVT.

It CAN work depending on the approach, but, depending on the results you want with the mesh, it will be more or less costly.