In terms of performance I mean.
From what I know about how GPU raytracing works, I doubt it would outperform planar reflections on a scene with only one reflective plane, but I would love to see some benchmarks.
In theory a full-resolution planar reflection and a 1-sample-per-pixel RT reflection would shade the same number of pixels. Both approaches do different kinds of extra work. Ray tracing goes through the whole BVH traversal, ray-triangle intersection, and hit-shader dispatch. I don’t know how that plays with cache coherency, since different rays can hit entirely different shaders, but it’s all self-contained on the GPU. Planar reflection rasterizes the scene again from a mirrored POV, so there’s visibility determination, draw calls, and so on, which all happens on the CPU to lessen the GPU work. So it depends on which side becomes the bottleneck first.
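For what it’s worth, that “different POV” is just the main camera mirrored across the reflection plane. A minimal sketch of the math (names and plane representation are my own, not from any engine):

```python
# Sketch: mirror a camera position across a reflection plane.
# The plane is given as a point on it plus a unit normal.
# All names here are illustrative, not from any particular engine.

def reflect_point(p, plane_point, plane_normal):
    """Reflect point p across the plane (point, unit normal)."""
    # Signed distance from p to the plane along the normal.
    d = sum((p[i] - plane_point[i]) * plane_normal[i] for i in range(3))
    # Move twice that distance back through the plane.
    return tuple(p[i] - 2.0 * d * plane_normal[i] for i in range(3))

# A camera 5 units above a ground plane at y = 0 ends up
# 5 units below it in the reflected view:
mirrored = reflect_point((1.0, 5.0, 2.0), (0.0, 0.0, 0.0), (0.0, 1.0, 0.0))
```

The same reflection applied as a matrix to the view transform is what actually drives the second rasterization pass.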
Now, raytracing should win compared to using multiple planar reflections, because the cost of setting up the reflected scene adds up fast.
Assuming you have some kind of RTX card, then probably still no.
However, a planar reflection is usually just one plane positioned along a vast, flat surface in your level, like a water plane or a flat ground plane; it covers only that single plane. Ray tracing can reflect off anything on the entire screen, without having to mess with reflection probes or live with the limitations of SSR. That said, since ray tracing also reflects things that are off screen, view frustum culling is no longer an applicable technique. I wonder how that works…
Basically, you have a representation of the entire scene on the GPU at all times. If you think about it, that is already the case, since shaders, vertex buffers, and textures are all on GPU memory anyway. For RT you add the locations of the meshes and their bounding boxes, and which shaders they are using, which are executed when the rays hit a triangle.
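To picture the extra bookkeeping: at minimum it’s a flat list of mesh instances with their bounds and hit shaders, which the acceleration structure build then organizes into a hierarchy. A toy sketch (structure and field names are illustrative, not any real API):

```python
# Sketch of the extra scene data ray tracing needs on the GPU:
# every mesh instance, its world-space bounds, and which shader
# to run when a ray hits one of its triangles.
# Names are illustrative, not from DXR/Vulkan.
from dataclasses import dataclass

@dataclass
class Instance:
    mesh_id: int      # which vertex/index buffers to fetch
    hit_shader: int   # shader table index, run on a ray-triangle hit
    aabb_min: tuple   # world-space bounding box of the instance
    aabb_max: tuple

scene = [
    Instance(mesh_id=0, hit_shader=2, aabb_min=(0, 0, 0), aabb_max=(1, 1, 1)),
    Instance(mesh_id=1, hit_shader=0, aabb_min=(3, 0, 3), aabb_max=(5, 2, 5)),
]

def scene_bounds(instances):
    """Union of all instance AABBs — conceptually the root of the BVH."""
    lo = tuple(min(i.aabb_min[k] for i in instances) for k in range(3))
    hi = tuple(max(i.aabb_max[k] for i in instances) for k in range(3))
    return lo, hi
```

The whole list has to stay resident and up to date every frame, whether or not anything in it is on screen, which is exactly why the usual view-based culling doesn’t help here.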
This means many view-based optimization tricks used during rasterization, like deferred shading or clustered forward lighting, don’t apply during ray tracing, so new tricks will need to be developed to optimize it.
Shading will certainly be a big challenge, especially if materials need to be accurate and lights need to be dynamic.
As for optimizations: using ray tracing doesn’t mean that everything in a reflection must be correct and identical to the main view.
I’m quite excited to see how developers will start to exploit ray tracing hardware by combining it with all sorts of distance cubemaps and light probes (i.e. true parallax-corrected cubemaps with characters rendered within them).
The actual ray traversal occurs in parallel with graphics work in a separate part of the chip, so you don’t really pay for that unless you over-utilize it and create a sync bottleneck. It does have to run a shader to check whether a ray-bounding-box hit is also a ray-triangle hit, and then of course it has to shade the sample; that is the real cost. And those bounds are theoretically pretty tight: the build breaks your meshes into little dense chunks of triangles with their own bounding boxes in the hierarchy. The exact algorithm used for that is implementation-specific, not defined by the spec, so it could vary and we don’t know the exact details. But the BVH Nvidia uses seems to keep the triangle intersection test count pretty manageable, judging from how well games like Battlefield run with high geometric complexity and large maps.
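To make the box-then-triangle distinction concrete, here’s a CPU sketch of both checks — a slab test for the bounding box and Möller–Trumbore for the triangle. The real implementation is vendor-specific and not necessarily anything like this; it’s just the textbook versions:

```python
# Sketch of the two intersection tests in BVH traversal:
# a cheap ray-vs-AABB slab test at interior nodes, and the exact
# ray-vs-triangle test (Moller-Trumbore) at the leaves.

def ray_hits_aabb(origin, inv_dir, box_min, box_max):
    """Slab test: does the ray enter the box at some t >= 0?
    inv_dir is the precomputed per-component reciprocal of the direction."""
    tmin, tmax = 0.0, float("inf")
    for i in range(3):
        t1 = (box_min[i] - origin[i]) * inv_dir[i]
        t2 = (box_max[i] - origin[i]) * inv_dir[i]
        tmin = max(tmin, min(t1, t2))
        tmax = min(tmax, max(t1, t2))
    return tmin <= tmax

def ray_hits_triangle(origin, direction, v0, v1, v2, eps=1e-8):
    """Moller-Trumbore: return the hit distance t, or None on a miss."""
    def sub(a, b): return tuple(a[i] - b[i] for i in range(3))
    def dot(a, b): return sum(a[i] * b[i] for i in range(3))
    def cross(a, b):
        return (a[1]*b[2] - a[2]*b[1],
                a[2]*b[0] - a[0]*b[2],
                a[0]*b[1] - a[1]*b[0])

    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(direction, e2)
    det = dot(e1, p)
    if abs(det) < eps:            # ray parallel to the triangle's plane
        return None
    t_vec = sub(origin, v0)
    u = dot(t_vec, p) / det       # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = cross(t_vec, e1)
    v = dot(direction, q) / det   # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, q) / det
    return t if t > eps else None
```

The point of the hierarchy is that the cheap slab test prunes whole chunks of the mesh so the expensive per-triangle test runs as rarely as possible.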
It won’t outperform a single planar reflection in most cases, because a zbuffer with depth sorting is still going to be more efficient for minimizing shading sample count, and planar reflections have a view frustum that is at most as big as that of the main camera, so the potentially visible set is smaller, with less memory that might need to be accessed. Where ray tracing wins hands down is for multiple reflection planes, which really don’t impact the performance at all (as far as the gpu is concerned, a second plane is just a different surface normal) unless you want recursive tracing. And of course it is the only truly general way to handle non-planar reflections, which again don’t really add to the cost significantly.
On the other hand, you can control exactly which pixels generate reflection rays. If you have a street with a lot of little puddles, ray tracing will be pretty efficient, because objects not directly visible in those puddles never need to be considered in visibility testing. With planar reflections, those objects would have to be transformed and then rejected by a stencil buffer. If you have two puddles in opposite corners of the screen, planar reflection has to submit draw calls for objects spanning the entire screen.
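That per-pixel control is easy to picture: the ray generation step simply skips any pixel whose material isn’t reflective. A toy CPU sketch (the boolean mask is made up for illustration):

```python
# Sketch: rays are only spawned where a "reflective" mask is set, so
# two small puddles cost rays proportional to their pixel area, not
# to the stretch of screen between them. Mask values are illustrative.

def reflection_rays(mask):
    """mask: 2D list of bools; return the pixel coords that spawn a ray."""
    return [(x, y)
            for y, row in enumerate(mask)
            for x, reflective in enumerate(row)
            if reflective]

# A 4x4 "screen" with two 1-pixel puddles in opposite corners:
mask = [[True,  False, False, False],
        [False, False, False, False],
        [False, False, False, False],
        [False, False, False, True]]
```

Here only 2 of 16 pixels trace a ray, whereas a planar pass would still have to cull and stencil the whole mirrored scene.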
You also have fine-grained control over sample density. Maybe for rough surfaces you only trace one ray for every 2x2 pixel block, using the normal from one of those pixels at random, and then accumulate the results with temporal antialiasing, for instance. UE4 already does this sort of thing for some expensive effects like capsule shadows.
In essence, you get the performance scalability and flexibility of screen-space reflections with the accuracy of planar reflections, at a higher base cost. If I were doing a flat mirror on a wall, I would probably still use planar reflections, since that gives a tight-fitting reflection frustum, and there is no reason you can’t use both, just as you can use planar and screen-space together.