I’ve been looking at the ray generation shaders for both the Ray Tracer and the Path Tracer, and they seem to take very different approaches under the hood.
**Path Tracer:** The ray generation system under the path tracer is pretty straightforward: a single ray generation shader models a pinhole camera sensor (see the sketch below).
**Ray Tracer:** I’ve been looking for something equivalent within the Ray Tracing system and I can’t find it. Instead, it looks like each feature has its own ray generation shader (RGS)? For example, RayTracingSkyLightRGS.usf contains a ray generation shader dedicated just to sky lighting, and from what I can tell there are lots of these independent RGSs.
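To illustrate what I mean on the path tracer side, the pinhole model boils down to roughly the following. This is just a sketch in DXR-style HLSL; the uniform names (CameraOrigin, CameraToWorld, TanHalfFOV, AspectRatio) are my own placeholders, not the actual PathTracing.usf variables:

```hlsl
// Placeholder camera uniforms -- illustrative names, not engine code.
cbuffer CameraCB : register(b0)
{
    float4x4 CameraToWorld; // camera-to-world transform
    float3   CameraOrigin;  // pinhole position in world space
    float    TanHalfFOV;    // tan(vertical field of view / 2)
    float    AspectRatio;   // image width / height
};

[shader("raygeneration")]
void PinholeCameraRGS()
{
    uint2 PixelCoord = DispatchRaysIndex().xy;
    uint2 Resolution = DispatchRaysDimensions().xy;

    // Map the pixel center to [-1, 1] normalized device coordinates.
    float2 NDC = (float2(PixelCoord) + 0.5) / float2(Resolution) * 2.0 - 1.0;
    NDC.y = -NDC.y; // flip so +Y points up

    // Pinhole model: every ray starts at the camera origin, and its
    // direction comes from unprojecting the pixel through the image plane.
    float3 DirCamera = normalize(float3(NDC.x * TanHalfFOV * AspectRatio,
                                        NDC.y * TanHalfFOV,
                                        1.0));

    RayDesc Ray;
    Ray.Origin    = CameraOrigin;
    Ray.Direction = normalize(mul((float3x3)CameraToWorld, DirCamera));
    Ray.TMin      = 0.0;
    Ray.TMax      = 1e27;

    // TraceRay(...) and the path tracing loop would follow from here.
}
```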
So, I’m trying to apply a lens distortion effect based on a physical lens model, and I’m trying to do it by modifying the direction the rays are cast by the ray tracer. It’s relatively easy with the path tracer because there is only one ray generation shader. As a proof of concept, here is a dumb distortion effect I created with this technique using the path tracer:
You’ll notice that I apply some pretty extreme distortion to parts of the default grid, and there is no artifacting as you’d get with a post-process distortion effect. I think this demonstrates that ray tracing is the ideal way to model camera lens distortion without artifacting.
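For concreteness, the change amounts to bending the screen coordinate before it’s unprojected into a ray direction. Here’s the shape of what I’m doing, using a radial (Brown-Conrady-style) model; the function and coefficient names here are my own, not engine parameters:

```hlsl
// Radial distortion applied in NDC space, before unprojection.
// K1/K2 are illustrative coefficient names from my lens model.
float2 DistortNDC(float2 NDC, float K1, float K2)
{
    float R2 = dot(NDC, NDC); // squared distance from the image center
    float Scale = 1.0 + K1 * R2 + K2 * R2 * R2;
    return NDC * Scale;
}
```

In the pinhole sketch above, the only change is running the pixel’s NDC through DistortNDC before computing DirCamera. Because the rays themselves are bent before anything is traced, the renderer samples the scene exactly where the lens model says to look, instead of warping an already-rendered image.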
So, if I wanted to produce this lens distortion effect with the Ray Tracer, would I have to go into every ray generation shader and modify the direction the ray is cast? Or is there a single ray generation shader that casts the rays into the world from the camera position, which the other shaders use as a baseline reference? And would it be crazy to hope for a future engine-level feature that lets us fiddle with ray generation shader values as an injected step in the rendering pipeline? I.e., the ray generation shader would act like a virtual function that can be overridden by a user-specified function.
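To be clear about what I’m imagining (none of this exists today; every name below is entirely hypothetical): each feature’s RGS would build its camera rays through one shared hook, and a user-specified function could be compiled in to replace the default. Something like:

```hlsl
// Entirely hypothetical -- sketching the "virtual function" idea, not a
// real engine API. Placeholder uniforms, same as my earlier sketches:
float TanHalfFOV, AspectRatio; // camera parameters
float K1, K2;                  // lens distortion coefficients

// Stand-in for the stock pinhole unprojection.
float3 PinholeUnproject(float2 NDC)
{
    return normalize(float3(NDC.x * TanHalfFOV * AspectRatio,
                            NDC.y * TanHalfFOV, 1.0));
}

// The lens model from my earlier sketch.
float2 DistortNDC(float2 NDC)
{
    float R2 = dot(NDC, NDC);
    return NDC * (1.0 + K1 * R2 + K2 * R2 * R2);
}

// The hook every feature's RGS would call to build its camera rays.
float3 GeneratePrimaryRayDirection(float2 NDC)
{
#ifdef USER_PRIMARY_RAY_OVERRIDE
    // User-injected step: bend the ray through a physical lens model.
    return PinholeUnproject(DistortNDC(NDC));
#else
    // Engine default: plain pinhole.
    return PinholeUnproject(NDC);
#endif
}
```

With a hook like that, every RGS would pick up the lens model for free, instead of each one needing to be patched by hand.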