Article written by Matt O.
When working with colors and color spaces in Unreal, it’s important to understand what happens to those colors as they move through the render pipeline. We’ll give an overview of when and how values are converted from sRGB to linear, what happens in postprocessing, and what happens when we reach our final pixel.
Color Space - The specific colors that will be produced for red, green, blue, and white. It is possible for different color spaces to be defined, which will cause 1,0,0 (for example) to be a more saturated color in one color space than another. Examples include Rec709, Rec2020, sRGB, ACES 2065-1, etc.
Color Encoding - How values are represented numerically, and how much light should be produced by that representation. Examples include Linear, sRGB, Gamma, PQ (ST 2084), and various vendor specific log encodings.
Gamma - A non-linear color encoding used to encode luminance values. The equation can be expressed as OutValue = A * InputValue ^ Y, where A is most often a constant 1.0 and Y is what we think of as a “Gamma Correction” value.
Gamut - The subset of colors that can be represented by a color space or output device, or the set of colors found within an image at a given time.
Linear Color Space - A color space with a linear color encoding. The amount of light represented is directly proportional to the stored value. Therefore, doubling the number will produce twice the brightness.
sRGB - In computer graphics (and in Unreal Engine), sRGB refers to a specific color space and also a specific color encoding. It is possible to have Linear sRGB, and also to have sRGB encoded sRGB. This term can cause confusion if it has not been made clear whether the context is about color space or color encoding.
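The gamma equation above (OutValue = A * InputValue ^ Y) can be sketched in a few lines of Python. The function name here is invented for the example, not an engine API:

```python
def gamma_encode(value, gamma, a=1.0):
    """OutValue = A * InputValue ^ Y, the gamma equation above."""
    return a * value ** gamma

# A display gamma of 2.2 darkens mid-tones:
gamma_encode(0.5, 2.2)  # roughly 0.218
```

Note that with Y = 1.0 this degenerates to the identity, which is exactly the linear encoding described above.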
There are, typically, four stages in rendering:
- Take texture input and convert it into the linear-encoded working color space
- Determine primitive visibility and how light interacts with each surface, placing the result into Scene Color
- Apply postprocessing to everything in Scene Color
- Finally, encode and display the result on the viewport device
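The four stages can be sketched as a toy pipeline. Everything here is illustrative, not engine code; the function name, the pure 2.2 gamma approximation, and the 80% reflectance value are all invented for the example:

```python
def render_frame(texture_value_srgb):
    """Toy model of the four rendering stages, for a single channel in [0, 1]."""
    # 1. Decode texture input into the linear working space
    #    (approximated here with a pure 2.2 gamma).
    linear_input = texture_value_srgb ** 2.2

    # 2. Lighting/shading: in this toy model the surface simply
    #    reflects 80% of incoming light into Scene Color.
    scene_color = linear_input * 0.8

    # 3. Postprocessing operates on linear Scene Color
    #    (a no-op placeholder here).
    post_processed = scene_color

    # 4. Encode the result for an sRGB display.
    display_value = post_processed ** (1.0 / 2.2)
    return display_value
```

The important point is the ordering: everything between decode and encode happens on linear values.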
The implicit working space of the engine is Linear sRGB. Textures are imported into the working space by converting from the encoding present in the texture. This is currently handled by the sRGB checkbox on the texture, which indicates that the texture file is either already linear (flag is off) or has an sRGB encoding (flag is on).
Rendering and postprocessing all proceed in linear space right up until we hit the Filmic Tone Curve (part of the tonemapper). The Tone Curve squeezes the large dynamic range of a scene down to 0.0-1.0 range.
Finally, we convert from linear (with look applied) back to sRGB encoded pixels for display.
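UE’s actual Filmic Tone Curve is an ACES-derived fit, but the idea of squeezing an unbounded linear range into 0.0-1.0 can be illustrated with the classic Reinhard operator, used here purely as a stand-in:

```python
def reinhard_tone_curve(linear_value):
    """Compress HDR values in [0, inf) into [0, 1).
    Reinhard is a stand-in here; UE uses an ACES-derived filmic curve."""
    return linear_value / (1.0 + linear_value)

reinhard_tone_curve(0.18)  # mid-grey stays relatively dark
reinhard_tone_curve(16.0)  # a bright HDR value lands just under 1.0
```

Any such curve is monotonic, so relative brightness ordering is preserved even as the dynamic range is compressed.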
So, given a PNG file encoded in sRGB, we’ve got a gradient of values from 0 to 255.
When those values are read, they’re converted through an sRGBToLinear function, which lowers the values.
This means that an input value of 128 (0.5 after normalization) becomes a linear value of approximately 0.214.
This in turn allows us to represent a lot of very dark values even though our input file has a low number of bits. When the data becomes linear, more bits are required to represent the linear values. We use 16-bit floats during the scene rendering, which allow us to simultaneously represent very small dark values and very high bright values.
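The conversion described above can be written out explicitly. This is the standard piecewise sRGB transfer function from IEC 61966-2-1, not engine source, and the function names are mine:

```python
def srgb_to_linear(s):
    """Standard piecewise sRGB decode (IEC 61966-2-1); input in [0, 1]."""
    if s <= 0.04045:
        return s / 12.92
    return ((s + 0.055) / 1.055) ** 2.4

def linear_to_srgb(l):
    """Inverse: encode linear light back to sRGB for display."""
    if l <= 0.0031308:
        return l * 12.92
    return 1.055 * l ** (1.0 / 2.4) - 0.055

# 0.5 (input value 128 on a 0-255 scale) decodes to about 0.214:
srgb_to_linear(0.5)                   # ~0.214
linear_to_srgb(srgb_to_linear(0.5))   # round-trips back to 0.5
```

The small linear segment near zero avoids the infinite slope a pure power curve would have at black, which is why the function is piecewise rather than a plain gamma.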
If you’re looking at the various render stages in something like RenderDoc, or the GPU Visualizer, we’re doing all of this in linear space. When we say “Linear Rendering in RGB”, we mean all of the stages described up to this point.
This is all that has to happen before we can hand off to postprocessing. It means that right before postprocessing starts, a color value specified in an unlit, opaque, emissive material is exactly that value in the SceneColor buffer.
Post processing is a sequence of phases, or passes. For this discussion, we will separate the Tonemapper from the preceding set of post processing passes that include operations such as temporal anti-aliasing, depth of field, motion blur, etc. These passes operate on and produce linear pixel values. The Tonemapper converts the linear scene to a signal which can be sent to a display device.
The Tonemapper is a sequence of steps that modify the pixels for output. They are an emulation of operations that would normally happen in a camera lens and its imaging sensor. The steps listed below are all part of the Tonemapper.
Sometimes we’ll want to turn off the “look” of the tonemapper. However, if we turn off the tonemapper with the console command showflag.tonemapper 0, all we’re left with is the output device conversion step. That may not be desirable, since it also affects things like bloom, vignette, and color correction.
We can, however, control the contribution or effect of each of those individual features in the postprocess settings of either a volume or on the camera.
Note: prior to 4.26 it was not possible to control the tone curve in postprocess settings.
The Tone Curve can be disabled by setting its contribution to 0.0. Users who want the tone curve turned off usually also want to turn off the Expand Gamut color correction. That operation is designed to coordinate with the Tone Curve by increasing the saturation of colors that are already somewhat saturated, which makes them look better after the tone curve is applied.
The Filmic Tone Curve has parameters that change the overall effect of the curve, but we do not have a parameter to replace the tone curve directly. If you require an entirely different tone curve, you should consider using OCIO as described below.
Currently, UE has two HDR Display targets, ACES1000nits and ACES2000nits. Handling output to HDR displays occurs in the “Output Device” step. When these targets are activated (via r.HDR CVars), the Filmic Tone Curve is turned off, and a different tone curve is applied, along with a different device output encoding.
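For HDR output, the device encoding step uses a curve like the PQ (ST 2084) encoding mentioned in the definitions above. A sketch of the PQ encode, using the published constants from SMPTE ST 2084 (this is the specification’s formula, not engine source):

```python
def pq_encode(nits):
    """SMPTE ST 2084 (PQ) encode: absolute luminance in nits -> signal in [0, 1]."""
    m1 = 2610.0 / 16384.0
    m2 = 2523.0 / 4096.0 * 128.0
    c1 = 3424.0 / 4096.0
    c2 = 2413.0 / 4096.0 * 32.0
    c3 = 2392.0 / 4096.0 * 32.0
    y = nits / 10000.0  # PQ is defined against a 10,000-nit ceiling
    y_m1 = y ** m1
    return ((c1 + c2 * y_m1) / (1.0 + c3 * y_m1)) ** m2

pq_encode(10000.0)  # the 10,000-nit ceiling maps to 1.0
pq_encode(1000.0)   # a 1000-nit peak uses roughly 3/4 of the signal range
```

Unlike sRGB, PQ is an absolute encoding: a given signal value always means the same physical luminance, which is why the HDR targets are described in nits.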
In 4.26 we also implemented support for OCIO color management in the engine. You can read more about how to set up and use it in our documentation. But how does that fit into the color pipeline we’ve discussed so far?
The role of OCIO color management is to provide consistent color conversions across various vendor tools. In the UE case, OCIO takes over the responsibility of the Tone Curve and Output Device. When it is enabled, the Tone Curve and Output Device are disabled automatically.
It should be noted that although OCIO is pictured next to the Tonemapper, it is independent of the Tonemapper. If you choose to disable the Tonemapper via the Showflag, OCIO will still occur, and will receive the linear SceneColor. If you choose to use a Post Process Material that is ReplaceTonemapper, OCIO will occur afterward, and you will need to carefully manage the result of your PPM because it will become the input to OCIO.
When you use a Post Process Material that is AfterTonemapper, it is called after the Tonemapper has processed the pixels. This means that the PPM AfterTonemapper will be operating on display pixels. They are no longer linear, because they have the Output Device encoded into the values.
When OCIO is enabled, the PPM is likewise after OCIO. Therefore, the PPM still is operating on display pixels, which will be the display pixels that resulted from the OCIO conversion.