How is data stored in an HDR scene capture?

I’m trying to pack two LDR RGB images into a single HDR image, the render target of a scene capture component.

The logic is rather simple: one image is stored in the fractional part of the values, while the other is multiplied by 256, floored, and added to the first (so it occupies the integer part). The packing/unpacking logic works when tested inside a shader, but as soon as the data is rendered to the texture I run into an issue: the fractional part is completely messed up on unpacking; basically, all I see there is noise.
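
For reference, here is a minimal NumPy sketch of the packing scheme as described above (the function names are mine, and NumPy stands in for the actual shader code):

```python
import numpy as np

def pack(img_a, img_b):
    """img_a (values in [0, 1)) goes into the fractional part;
    img_b is scaled by 256 and floored into the integer part."""
    return np.floor(img_b * 256.0) + img_a

def unpack(packed):
    """Split the packed value back into the two LDR images."""
    integer_part = np.floor(packed)
    img_a = packed - integer_part   # recover the fractional part
    img_b = integer_part / 256.0    # undo the scale/floor
    return img_a, img_b
```

With 32-bit floats (as inside the shader) this round-trips with plenty of precision; the problem only shows up after the render-to-texture step.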

So I was wondering how the HDR scene capture render target might store the data.

As a test I produced a single flat color (0.1, 0.2, 0.3), captured it, and read the texture back in another shader, where I compared it to the original color. There were differences on each channel, ranging from 0.0000244 to 0.00019. Interesting.
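
Out of curiosity I checked whether differences of that size are consistent with 16-bit float storage (e.g. a PF_FloatRGBA / RGBA16F target; that format is an assumption on my part, not something I’ve confirmed):

```python
import numpy as np

# Round-trip (0.1, 0.2, 0.3) through half precision, as a 16-bit
# float render target presumably would, and compare to the original.
original = np.array([0.1, 0.2, 0.3], dtype=np.float32)
roundtrip = original.astype(np.float16).astype(np.float32)
print(np.abs(original - roundtrip))
# prints roughly [2.44e-05, 4.88e-05, 4.88e-05] -- the same order of
# magnitude as the differences I measured in the shader
```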