I’m not sure we’re really disagreeing about most of this, but it remains my opinion that translucency in general, including the specific case of ray-traced glass, has other issues that are more significant than this one. I think those issues should, and probably will, be addressed first.
Why would it fail in those cases? I don’t see why the approach of duplicating geometry wouldn’t apply equally well to a baked geometry cache or to a live simulation.
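To be concrete about what I mean by “duplicating geometry,” here’s a minimal sketch. The mesh representation (plain vertex and index lists) and the function name are my own inventions for illustration; a real pipeline would read from an Alembic/USD cache or a simulation’s output buffers. But the operation itself is purely per-frame topology, which is why I don’t see it caring where the triangles came from.

```python
from typing import List, Tuple

Vec3 = Tuple[float, float, float]
Tri = Tuple[int, int, int]

def duplicate_with_flipped_winding(
    verts: List[Vec3], tris: List[Tri]
) -> Tuple[List[Vec3], List[Tri]]:
    """Return the original triangles plus a duplicated set whose winding
    order (and therefore facing direction) is reversed, giving a
    single-sided surface an inward-facing second shell."""
    flipped = [(a, c, b) for (a, b, c) in tris]  # swap two indices per tri
    return list(verts), tris + flipped

# Same call whether `tris` was loaded from a baked cache frame or pulled
# from a live simulation step each tick:
quad = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0), (0.0, 1.0, 0.0)]
faces = [(0, 1, 2), (0, 2, 3)]
_, doubled = duplicate_with_flipped_winding(quad, faces)
print(doubled)  # [(0, 1, 2), (0, 2, 3), (0, 2, 1), (0, 3, 2)]
```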
Keeping in mind that not all production environments are created equal, I personally don’t find the thought of using some custom logic to prepare data for migration from one piece of software to another particularly alarming. In fact, in my own experience, it’s nearly ubiquitous. Again, note that I’m not saying it’s *desirable*, just that it’s not only possible, but often necessary in practice.
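For what it’s worth, the kind of custom logic I’m describing is usually small glue code along these lines. Everything here is hypothetical for illustration (the JSON-lines cache format and the attribute rename are invented; a real show would be reading Alembic or USD), but the shape of it is what’s ubiquitous:

```python
import json
from pathlib import Path

def migrate_cache(src: Path, dst: Path) -> None:
    """Copy a per-frame cache, applying show-specific fix-ups so the
    target package accepts data written by the source package."""
    with src.open() as fin, dst.open("w") as fout:
        for line in fin:
            frame = json.loads(line)
            # Example fix-up: the target DCC expects lowercase attribute
            # names, so rename them on the way through.
            frame["attributes"] = {
                k.lower(): v for k, v in frame.get("attributes", {}).items()
            }
            fout.write(json.dumps(frame) + "\n")
```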
I’m just not sure I agree that this tinted glass thing is a *prime* example of why real-time renderers haven’t displaced offline renderers. I think there are plenty of other, more pressing limitations involved, to say nothing of the fact that studios tend to have a lot of custom infrastructure built around specific offline renderers that would need to be mostly thrown out and re-created in order to support using Unreal instead. There are also plenty of non-technical obstacles, such as training and familiarity, that make it challenging to completely replace a widely used offline renderer with a real-time renderer that most people aren’t yet accustomed to.
We’re saying lots of words at each other, but I’m not sure we’re actually communicating. I’m sure Epic’s developers can form an opinion on which limitations, bugs, and missing features should be addressed first without input from either of us.