Access UV float bits in shader

Hello, I have a shader where I’m trying to access the bits of a UV float value with a custom HLSL node. It would be useful for packing data into UVs more flexibly. The code starts out with a simple test:

// Reinterpret the raw IEEE 754 bits of the input float, then test bit n
uint bits = asuint(inputValue);
return (bits & (1u << n)) != 0;

Where I change n to be whichever bit I want to look at. I route the output to Base Color, which just shows me whether the bit is 1 (white) or 0 (black). It all works perfectly if I input a scalar parameter: all 32 bits are there in float format just as I’d expect. However, if the input is routed from TexCoord, things get weird. Note that this is a simple mesh with all UVs set to the same value using the Geometry Script library.

So what happens is, only the leftmost 18 bits of the float are accessible. Any bits after that show up as a sort of half-and-half noise pattern. I thought it might have to do with how Unreal mesh UVs are only 16 bit by default, but changing them to high precision (32) for the mesh did nothing. The other weird thing is, those first 18 bits are correct, but only if the float is interpreted as a 32-bit float: the exponent part is 8 bits, whereas if it were a 16-bit float like it should be, the exponent would be only 5 bits. Anyway, this doesn’t really affect what I’m trying to do - I can work with just the 18 bits. I’m just upset because I don’t understand this behavior. Would anyone be willing to share some insight?
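For anyone who wants to poke at this themselves, one quick way to see which bits a half-precision round trip would throw away is to compare against HLSL’s f32tof16 / f16tof32 in the same Custom node. Just a sketch, reusing the same inputValue and n inputs as above:

uint original  = asuint(inputValue);
uint roundTrip = asuint(f16tof32(f32tof16(inputValue))); // force a float -> half -> float round trip
uint lostBits  = original ^ roundTrip;                   // 1s mark bits the half conversion changed
return (lostBits & (1u << n)) != 0;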

Update - I think I understand some of the problems now; details below. And in case anyone finds this useful: I was able to write BP code that stores specific binary values in UVs, which a shader can then decode. It’s possible to pack a bunch of bits into a UV value to act as flags in the shader, or even write multiple values into a single UV. If anyone’s interested I’ll post the code.
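Roughly, the shader-side decode is along these lines (a sketch, not my exact code - here I just assume the UV channel stores a whole number from 0 to 2047, which half precision can represent exactly, and treat each bit of that number as a flag):

// inputValue = the UV channel routed into the Custom node
uint flags = (uint)round(inputValue); // recover the packed whole number
return (flags & (1u << n)) != 0;      // test flag bit n

Packing multiple values works the same way, just masking and shifting different bit ranges out of flags.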

First of all, the noise pattern can be fixed by putting a VertexInterpolator node at the output of the custom HLSL node. That just makes sense anyway - you’d never want to decode a UV value per pixel, and my guess is that the per-pixel interpolation introduces tiny rounding errors in the low mantissa bits, which is what showed up as noise. Once I did that, it became clear that the UV values really were limited to those 18 bits.
I’m guessing they are half floats. Looking at the standard float bit structure (sign, exponent, mantissa), a 16-bit float has 1 sign bit, a 5-bit exponent and a 10-bit mantissa. The HLSL side converts them to f32, probably making the exponent 8 bits and keeping the sign bit and the 10-bit mantissa - thus 18 bits. Wait, no, that’d be 19 with the sign. I don’t know. But the largest UV value you can write with a Geometry Script blueprint is 65504, which is the largest value a 16-bit float can hold, so I’m pretty sure that’s the limitation here. And whatever ‘high precision’ 32-bit UVs are in Unreal, that must refer to something else.
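For anyone curious about where the bits end up, this is roughly what promoting a normalized half to float32 looks like, ignoring denormals/infinity/NaN (h here is a hypothetical raw 16-bit half pattern, not something from my material):

uint sign    = (h >> 15) & 0x1;
uint exp5    = (h >> 10) & 0x1F;   // 5-bit exponent, biased by 15
uint mant10  = h & 0x3FF;          // 10-bit mantissa
uint f32bits = (sign << 31) | ((exp5 - 15 + 127) << 23) | (mant10 << 13);
// sign (1) + exponent (8) + original mantissa (10) = 19 meaningful leading bits;
// the low 13 mantissa bits come out as zero

So the math says 19 rather than 18 meaningful bits - possibly the one I’m not seeing vary is just the sign bit, which is always 0 for a non-negative UV.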