Post process volumes are often used to simulate an eye or camera adapting to a new environment, but the current linear interpolation by distance is not enough to blend post process values smoothly and believably.
A smoother behaviour can be implemented manually in Blueprint by blending the volume weight, but that requires a per-project setup for something that could be standardized, which would be an overall improvement for everyone.
I propose a settable render variable that smoothly and automatically interpolates the volume's blend weight whenever a flag on the post process volume is set.
A sensible default for this variable could mimic the auto exposure timings to give a more coherent adaptation feel, and the interpolation function could use a more pleasing curve than simple linear interpolation (possibly customizable from a curve asset?).
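To make the idea concrete, here is a minimal sketch of the proposed behaviour in plain C++. `BlendTime` stands in for the proposed render variable and `ComputeBlendWeight`/`EaseSmooth` are hypothetical names, not existing engine API; a smoothstep curve is used as one example of a "more pleasing" curve than linear.

```cpp
#include <algorithm>
#include <cmath>

// Smoothstep easing: a gentler S-curve than plain linear interpolation.
// A curve asset could replace this function in a real implementation.
float EaseSmooth(float T)
{
    T = std::clamp(T, 0.0f, 1.0f);
    return T * T * (3.0f - 2.0f * T);
}

// Returns the volume's blend weight given the time (in seconds) since
// the camera entered it. BlendTime plays the role of the proposed
// render variable; a default of around 1-2 seconds would roughly match
// auto exposure adaptation speed. (All names here are hypothetical.)
float ComputeBlendWeight(float SecondsSinceEntry, float BlendTime)
{
    if (BlendTime <= 0.0f)
        return 1.0f; // zero blend time = instant, i.e. current behaviour

    return EaseSmooth(SecondsSinceEntry / BlendTime);
}
```

Because the blend is driven by time since entry rather than by distance, the result is independent of how fast the camera is moving, which is the key property for the tunnel scenario below.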
A big advantage would be the ability to make smaller volumes that fit the intended areas precisely, instead of oversizing them just to provide some kind of blending.
Example scenario:
With the current system, a post process volume placed in a road tunnel cannot account for both the speed of a car and the speed of a walking pedestrian. If the blend radius is small, the visual change is too harsh from the car's point of view; if it is large enough for the car, the pedestrian gets a very slow blend that starts far from the intended area, and it even affects a pedestrian who never enters the tunnel at all, just because the radius was sized for the car.
With the proposed solution, the volume would only kick in once the camera actually enters the tunnel, not merely near it, and would provide a good-looking, believable, camera-speed-agnostic blend that feels like auto exposure.