I have a few questions:
In the official tonemapper feature highlight video (00:42:50, "Deviation from ACES standard") they said changing the values can be used to simulate different film stocks.
1- It would be great to have film stock presets in the post-process Film section. How can I suggest this feature to the UE4 developers?
Back when I was using Octane Render, it had a nice list of film response curve / film stock presets.
In case I don't want to wait until they make something like this, or in case they never plan to add presets:
2- Where can I find a database of film stock response curve values, so I can enter them manually in the UE4 tonemapper / Film section?
I searched in a lot of places, including Wikipedia, with no luck.
I don't really understand the math behind this (it's too complicated for me), but I'd at least like to know the types of curves.
3- Are cinema logarithmic curve profiles completely different from the ACES S-curve, or are both just gamma curves with different shapes? Is the ACES S-curve a logarithmic curve?
In other words, can cinema logarithmic curve profiles (such as Canon C-Log, Sony S-Log, etc.) be achieved by entering values manually
through the UE4 filmic tonemapper inputs (Slope, Toe, Shoulder, etc.)?
If not, why don't the UE4 devs give the user the freedom to shape the curve into any type they want, through a graph controller or at least through type presets (S-curve, Logarithmic, Linear),
instead of giving them operators/inputs that can shape only one type of curve (an S-curve)?
I would really prefer using logarithmic curves like Sony S-Log.
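To make question 3 concrete, here is a rough numeric sketch of the three curve shapes. These are generic, illustrative formulas I made up for comparison, not the actual S-Log or UE4 tonemapper math; the point is only how each shape treats shadows and highlights differently.

```python
import math

def gamma_curve(x, g=1.0 / 2.2):
    # pure power-law "gamma" curve
    return x ** g

def log_curve(x, scale=32.0):
    # generic logarithmic encoding (illustrative, not real S-Log math):
    # lifts shadows a lot, compressing a wide linear range into 0..1
    return math.log2(1.0 + x * scale) / math.log2(1.0 + scale)

def s_curve(x, toe=2.0, shoulder=0.8):
    # generic filmic-style S-curve (illustrative): the toe darkens
    # shadows and the shoulder rolls off highlights; normalized so 1 -> 1
    f = lambda v: v ** toe / (v ** toe + shoulder ** toe)
    return f(x) / f(1.0)

for x in [0.01, 0.18, 0.5, 1.0]:
    print(f"{x:5.2f}  gamma={gamma_curve(x):.3f}  "
          f"log={log_curve(x):.3f}  s={s_curve(x):.3f}")
```

Running this shows the key difference: at a shadow value like 0.18, the log curve outputs the highest code value, the gamma curve is in the middle, and the S-curve crushes it toward black. So a log profile and an S-curve really are different shapes, even though all three end up in the same 0..1 range.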
4- How much dynamic range is the ACES S-curve profile standard capable of (with the default values in the UE4 post-process tonemapper / Film section)?
Is it also 25 stops, like "Filmic Blender", since that is based on the ACES standard too?
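For anyone else wondering what a "stop" count actually measures: each stop is a doubling of light, so dynamic range in stops is just the base-2 logarithm of the ratio between the brightest and darkest values a curve can represent. A minimal sketch (the example range is hypothetical, just to show the arithmetic):

```python
import math

def dynamic_range_stops(brightest, darkest):
    # each "stop" is a doubling of light, so the range in stops
    # is the base-2 log of the brightest/darkest ratio
    return math.log2(brightest / darkest)

# e.g. a curve covering linear values from 2^-10 up to 2^15
# spans 25 stops:
print(dynamic_range_stops(2 ** 15, 2 ** -10))  # → 25.0
```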
Full Story: (you can skip this if you already know most of the terms above)
Note: these are not facts; facts can be found in the reference section I included below.
This is just my way of understanding things, and I hope somebody corrects me if anything is wrong so I can understand it better.
My journey of understanding how dynamic range works in 3D renders and Unreal Engine:
One of the things that makes cinema cameras so special is that they can capture a wide dynamic range, close to that of the human eye,
which is one of the key ingredients that gives movies their incredible look and image detail compared to video recorded on a DSLR or compact camera.
Wide dynamic range takes real-life shots to a whole new level; imagine what it would do for 3D cinematics aimed at photorealism.
I knew how dynamic range works in real-life cameras, or at least the factor that gives a camera a wider dynamic range,
but when it came to 3D rendering it was a complete mystery.
As far as I know, in reality the bigger the sensor, the wider the dynamic range it can capture; it's that simple.
But that's not the case in 3D software, so I wondered how this works in 3D rendering software and how I could achieve that kind of dynamic range in a game engine.
So I came across this nice article by a Blender guy, which explained a lot of things.
Blender had a serious problem of being capable of only 8 stops of dynamic range due to its post-process pipeline, and here is what I understood of how it works:
The 3D software renders content in HDR, and working with HDR data implies linear color math. The image then has to be transformed into the sRGB color space (LDR),
losing a lot of detail in the highlights and shadows in the process. So someone implemented something in Blender similar to / based on the ACES standard, called "Filmic Blender", which lets you go from 8 stops of dynamic range to a massive 25 stops.
In real life, when you shoot with a camera, you have 3 options:
Shoot in sRGB video mode, which converts the raw data to sRGB directly inside the camera. This method is similar to old Blender (you lose a lot of detail).
Shoot in RAW mode, which saves the raw sensor data and gives you the flexibility to do tone mapping and color grading in post-production (HDR linear space).
Shoot in log mode (log stands for logarithmic), which allows HDR data to be stored/baked in an LDR format (bake HDR detail into LDR).
There are many types of log modes, such as Canon C-Log and Sony S-Log; it's a very cool method, if not the best, in my opinion.
Up to this moment I had no idea what tone mapping was or what it meant; I had only understood small parts of the concept.
Then I came across this, and found out UE4 had already adopted the ACES Filmic Tonemapper a while ago.
Now I understood that tone mapping is the process of converting an HDR linear image to the LDR that a traditional sRGB display can output.
So I came to the conclusion that the main factor controlling how much dynamic range reaches a 3D render's final image is the tonemapper,
not to mention that it surely must be color graded after the tone mapping process.
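For a concrete feel of what a filmic tonemapper does to HDR values, here is Krzysztof Narkowicz's well-known curve fit that approximates the ACES filmic look. To be clear, this is a published approximation, not UE4's exact implementation:

```python
def aces_approx(x):
    # Narkowicz's rational-polynomial fit approximating the ACES
    # filmic tonemapper (an approximation, not UE4's exact code)
    a, b, c, d, e = 2.51, 0.03, 2.43, 0.59, 0.14
    y = (x * (a * x + b)) / (x * (c * x + d) + e)
    return min(max(y, 0.0), 1.0)

# HDR linear values are rolled off smoothly instead of hard-clipping:
for v in [0.18, 1.0, 4.0, 16.0]:
    print(f"linear {v:5.2f} -> display {aces_approx(v):.3f}")
```

Unlike the hard clip of a plain sRGB conversion, values like 4.0 and 16.0 still land at different display levels, so highlight detail survives into the LDR image; that smooth shoulder roll-off is the whole point of the S-curve.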
But I asked myself: what if I wanted to get something similar to a camera sensor's raw data, but in Unreal Engine?
I'd have to capture the image before it's touched by the tonemapper.
So I searched until I found this cool option; you could consider it the RAW of UE4.
It can be found in the viewport under the View Mode menu > Buffer Visualization > Pre Tonemap HDR Color.
When I exported it in EXR format (which is very similar to RAW) and opened it in Photoshop,
it was the only buffer that had the highlight and shadow detail preserved.
I could crank the exposure up and down and see detail revealed in the shadow and highlight areas.
The other buffers had no information saved in their highlights and dark areas, even when exported as EXR,
because their data was already lost in the tone mapping process.
The downside of this unique buffer is that you lose all the post-process effects.
References:
https://www.youtube.com/watch?v=k-yXYVgDDk0 (The Difference between Log and RAW)
https://www.youtube.com/watch?v=pPA2-HBi_iY (RAW Explained)