Access UV float bits in shader

Hello, I have a shader where I'm trying to access the individual bits of a UV float value with a custom HLSL node. This would be useful for packing data into UVs more flexibly. The code starts with a simple test:

// Reinterpret the float's raw bits and test bit n (the bit index I want to inspect).
uint bits = asuint(inputValue);
return (bits & (1u << n)) != 0;

Where I change n to whatever bit I want to look at. I route the output to Base Color, which shows whether the bit is 1 (white) or 0 (black). It all works perfectly if the input is a scalar parameter: all 32 bits are there in float format, just as I'd expect. However, if the input is routed from a TexCoord node, things get weird. Note that this is a simple mesh with all UVs set to the same value using the Geometry Script library.

What happens is that only the leftmost 18 bits of the float are accessible. Anything past that shows up as a sort of half-and-half noise pattern. I thought it might have to do with how Unreal mesh UVs are only 16-bit by default, but switching the mesh to high-precision (32-bit) UVs changed nothing. The other weird thing is that those first 18 bits are correct, but only if the value is interpreted as a 32-bit float: the exponent field is 8 bits. If it were a 16-bit float, as I'd expect the UV data to be, the exponent would only be 5 bits.

Anyway, this doesn't really affect what I'm trying to do; I can work with just the 18 bits. I'm just bothered because I don't understand this behavior. Would anyone be willing to share some insight?
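In case it helps, here's a rough follow-up test I could run to poke at the 16-bit theory. This is only a sketch, using the same assumed Custom node pins (a float input named inputValue and a scalar n for the bit index); f32tof16 and f16tof32 are standard HLSL intrinsics that round-trip a value through half precision, so the idea is to compare which bits survive that trip against what the TexCoord actually delivers:

// Compare the incoming value's raw bit pattern against the same value after
// an explicit fp16 round trip, to see whether the truncation matches
// half-float compression.
uint rawBits  = asuint(inputValue);
uint halfTrip = asuint(f16tof32(f32tof16(inputValue)));
uint mask = 1u << (uint)n;
float rawBit  = (rawBits  & mask) != 0 ? 1.0 : 0.0;
float tripBit = (halfTrip & mask) != 0 ? 1.0 : 0.0;
// White = bit set in both, black = set in neither, gray = the two disagree.
return 0.5 * (rawBit + tripBit);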