My understanding is that BaseColor requires a float value for each of the R, G, B, and A channels:
However, does anyone know the level of precision this float value can take before it is ignored by the engine? For example, in the screenshot below, notice how I have set the R value to 0.431746, but the blueprint node displays 0.432.
Is this apparent rounding in the blueprint node purely visual, or is there actually a finite level of precision for BaseColor float values?
Because if BaseColor really can store data with 6 decimal places, then that provides 1 million possible values for each of the R, G, B, and A channels, resulting in 1 septillion (1,000,000,000,000,000,000,000,000) possible colors, which I imagine would take a prohibitively large amount of memory.
So if the float values are indeed rounded to a finite level of precision:
What is this level of precision?
How many unique values can I specify for each channel? I.e. is 0.431756 different from 0.431757, or are they interpreted as the same value?
How can I define a BaseColor and ensure with 100% certainty that this color is unique from another color? Since BaseColor values are clamped between 0 and 1, I assume I could define the colors with integers (which are then normalised by the engine into floats), but at what point will my unique integers be normalised into identical floats by the engine, and therefore cease to be unique colors?
If floats are rounded by the engine to a reduced level of precision, why is it even possible to specify a greater level of precision (i.e. 6 decimal places) if such precision is ignored?
I understand these questions might seem trivial; however, the reason I’m seeking clarification is to allow a logical comparison of different BaseColors. I need to be able to define a BaseColor using numerical values, and when converting the BaseColor back to numerical values, the values must always be identical and pass a logical == test. I.e. there must be zero data loss during the forward and backward conversion to BaseColor.
So it looks like the colors are stored as 8-bit unsigned integers (range 0-255), despite the screenshot below suggesting they are stored as floats (range 0-1).
There must therefore be some background conversion process, where the engine converts normalised float values (range 0-1) to integers (range 0-255), stores them as integers, then converts them back to floats again (range 0-1) when the values are later extracted.
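For illustration, here is a minimal standalone sketch of what such a round trip might look like, assuming straightforward 8-bit quantisation with rounding (the function names are illustrative only, not engine API, and any sRGB encoding is ignored):

```cpp
#include <cmath>
#include <cstdint>
#include <cstdio>

// Quantise a normalised float (0..1) to an 8-bit channel value.
uint8_t FloatToByte(float Value)
{
    const float Clamped = Value < 0.0f ? 0.0f : (Value > 1.0f ? 1.0f : Value);
    return static_cast<uint8_t>(std::lround(Clamped * 255.0f));
}

// Expand the stored 8-bit value back to a normalised float.
float ByteToFloat(uint8_t Value)
{
    return Value / 255.0f;
}

int main()
{
    const float Original  = 0.431746f;
    const uint8_t Stored  = FloatToByte(Original);  // 110
    const float Recovered = ByteToFloat(Stored);    // ~0.431373, not 0.431746

    std::printf("stored = %u, recovered = %.6f\n",
                static_cast<unsigned>(Stored), Recovered);
    return 0;
}
```

Under this assumption, 0.431756 and 0.431757 both quantise to 110, so they come back out as the same value.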
I guess I can live with this, and just need to put faith in the engine’s ability to convert int → float → int without any data loss.
It just feels like very bad practice to carry out comparison logic on integers after they have been converted to a normalised float and back again. It goes against everything I have learnt about the pitfalls of comparing floating-point data (see articles below), especially after the data has been rounded for the purpose of normalisation. I just worry that somewhere during the forward or backward conversion process, some rounding occurs which results in a different before-and-after float value.
This wouldn’t normally be a problem for most use cases, where the data is input but never exported, since it is unlikely the user would ever notice such a minor change in color. But it does become a huge potential problem if data is exported (e.g. for further calculations and game logic) and colors have randomly and unexpectedly changed during the conversion process.
But…that said, it’s not a deal-breaker if this is how the engine works. I guess I will just need to create lots of testing and validation logic to ensure there is no data loss.
Hopefully in the future it will be possible to specify 0-255 integer values directly, if this is how they are stored by the engine.
Note that this is all based on how the graphics card and shader model works.
When texture data gets uploaded to the graphics card (as well as vertex data), it is generally “converted” to one of a few supported formats. One of those is “8 bit normalized to 0 … 1.”
Even though the physical representation may be one of 8 bits, the graphics card will present this to the shader as-if it was a normalized 0…1 value. The magic of the graphics card ALU hardware and shader compiler makes this happen – no actual “conversion” happens at runtime, it’s just a matter of how the bits are interpreted by the hardware.
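As a rough illustration of what “normalized” means here (a standalone sketch of the interpretation, not what the driver literally executes):

```cpp
#include <cstdint>
#include <cstdio>

int main()
{
    // An 8-bit "unorm" channel is presented to the shader as byte / 255.0.
    const uint8_t Bytes[] = {0, 1, 110, 255};
    for (uint8_t B : Bytes)
    {
        std::printf("%3u -> %.6f\n", static_cast<unsigned>(B), B / 255.0f);
    }

    // The smallest representable step is therefore 1/255.
    std::printf("step = %.6f\n", 1.0f / 255.0f);
    return 0;
}
```

Any two floats closer together than about half of that step end up as the same byte.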
Note that you can have textures with 16 bits integer per component resolution, and textures with 16 bits floating point per component resolution, and even 32 bits per component floating point resolution. Or, going the other way, textures with compressed formats that use 4 bits per pixel, but might give you an effective precision of perhaps 5 bits per component for “most normal cases.”
Programming graphics shaders requires a fair amount of understanding of what the shader model promises, and how it works. It’s not the same as an IEEE FPU in a CPU. (And, by the way, neither are the SIMD registers, although those are pretty close these days, most non-conforming behavior being opted-out for performance reasons.)
Also, some APIs for programming graphics cards may even provide for 32-bit and 64-bit floating point IEEE-like values – for example, CUDA for NVIDIA GPUs. But that doesn’t mean that the same GPU, when used for computing pixel shaders, will use IEEE mode. Only the limitations guaranteed by the specific shading model can actually be relied upon. For the typical current game, this is shader model 5: Shader Model 5 - Win32 apps | Microsoft Docs
Essentially what I am trying to do is find a reliable method to check whether two colors are the same. In other words:
Define a BaseColor channel value using numbers (e.g. 0.4575)
Take a SceneCapture
Read the chosen pixel’s BaseColor channel
Check if the value is the same (i.e. 0.4575), and if so, return TRUE
However, what has become clear from this discussion is that conducting such a test is by no means trivial. As you point out, there are many nuances which might cause values to change somewhere during the process, causing any dependent logic to fail.
I guess the only practical solution would be to assume 8-bit unsigned integers, then implement some watertight testing and validation at every step to ensure data is as expected.
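One way to make that final comparison robust, assuming the channel really is stored as an 8-bit value and read back as byte/255, is to quantise the reference value the same way and compare the quantised results rather than raw floats. A minimal standalone sketch (function names are illustrative, not engine API):

```cpp
#include <cmath>
#include <cstdint>
#include <cstdio>

// Quantise a normalised channel value to 8 bits, mirroring the assumed storage.
uint8_t Quantize(float Value)
{
    const float Clamped = Value < 0.0f ? 0.0f : (Value > 1.0f ? 1.0f : Value);
    return static_cast<uint8_t>(std::lround(Clamped * 255.0f));
}

// Compare the value we intended to write with the value read back from the capture.
bool SameChannel(float Reference, float ReadBack)
{
    // Comparing quantised bytes avoids an exact float == test on round-tripped data.
    return Quantize(Reference) == Quantize(ReadBack);
}

int main()
{
    const float Reference = 0.4575f;          // the value we defined (step 1)
    const float ReadBack  = 117.0f / 255.0f;  // what an 8-bit capture might hand back (step 3)

    std::printf("%s\n", SameChannel(Reference, ReadBack) ? "TRUE" : "FALSE");
    return 0;
}
```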
Particular platforms will make particular optimizations. If you want to test strict equality, then you’re in for a lot of pain – different platforms treat anti-aliasing, texture filtering, and rasterization rules differently; different hardware on the same platform (e.g., AMD vs NVIDIA vs Intel on PC) does different things; and even different generations of the same vendor on the same platform may change this.
If you’re doing this to try to carry some other kind of data forward – for example, some kind of ID – then you may be better off using a parameter that’s explicitly a “floating point value” or “integer value” with a defined precision. Unfortunately, not all of the capabilities of the underlying shader model are exposed by the standard Unreal shader graph, because what you can do on a PS/4 is different from a modern PC is different from a $50 Android featurephone.
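If the goal really is to carry an ID through a color channel, one possible approach (a sketch only, assuming 8-bit channels with no sRGB conversion or filtering applied to the capture; the function names are not engine API) is to restrict yourself to the 256 channel values that survive quantization exactly:

```cpp
#include <cmath>
#include <cstdint>
#include <cstdio>

// Encode an integer ID (0..255) as a channel value that survives 8-bit quantization.
float EncodeID(uint8_t Id)
{
    return Id / 255.0f;
}

// Recover the ID from the value read back out of the capture.
uint8_t DecodeID(float Channel)
{
    return static_cast<uint8_t>(std::lround(Channel * 255.0f));
}

int main()
{
    // Verify that every ID round-trips exactly under these assumptions.
    for (int Id = 0; Id <= 255; ++Id)
    {
        if (DecodeID(EncodeID(static_cast<uint8_t>(Id))) != Id)
        {
            std::printf("ID %d did not survive the round trip\n", Id);
            return 1;
        }
    }
    std::printf("all 256 IDs round-tripped exactly\n");
    return 0;
}
```

Values of the form n/255 round-trip exactly under these assumptions; anything in between gets snapped to the nearest such value.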