The name pretty much says it all. Here’s a paper on an implementation that someone else came up with - I found it while doing research based on a similar idea: using the depth and screen-space normal G-buffers to generate screen-space displacement maps, which gets around the edge problems of parallax occlusion mapping and more accurately simulates/fakes extremely high geometry detail. This guy beat me to the punch, and apparently his algorithm is faster than POM since its cost is completely independent of scene detail. I lack the programming knowledge to implement this in UE4, but for you guys at Epic I can’t see this taking more than an hour or two to test - it seems very promising. Link: http://www.divideconcept.net/papers/SSDM-RL08.pdf
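To make the core idea concrete, here’s a rough CPU sketch of my reading of the technique - not the paper’s actual algorithm, and obviously a real version would be a GPU pass. The function name and parameters (`displace`, `strength`, etc.) are made up for illustration: each pixel gets pushed along its screen-space normal by a height-map sample, attenuated by depth so nearby surfaces displace more on screen.

```python
# Hypothetical sketch of screen-space displacement (my reading of the
# idea, not the paper's algorithm): offset each pixel along its
# screen-space normal by a height value scaled by inverse depth.
import numpy as np

def displace(color, depth, normal_xy, height, strength=4.0):
    """Forward-warp a color buffer by a per-pixel screen-space offset.

    color     : (H, W, 3) float array, the shaded image
    depth     : (H, W) float array, view-space depth (> 0)
    normal_xy : (H, W, 2) float array, screen-space normal XY
    height    : (H, W) float array, displacement map in [0, 1]
    """
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Offset in pixels: along the screen-space normal, scaled by the
    # height sample and attenuated with distance (crude perspective).
    off = strength * height[..., None] * normal_xy / depth[..., None]
    xd = np.clip(np.round(xs + off[..., 0]).astype(int), 0, w - 1)
    yd = np.clip(np.round(ys + off[..., 1]).astype(int), 0, h - 1)
    out = color.copy()            # keep source where nothing lands (naive hole fill)
    out[yd, xd] = color[ys, xs]   # naive scatter; a real pass would depth-test writes
    return out
```

Again, just a toy to show the data flow (depth + normals + height in, warped image out) - a proper implementation would resolve overlaps with a depth test and fill disocclusion holes properly.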
Technique seems interesting at least. Now I’m wondering whether it would be better to apply this before shading (i.e. modify the G-buffer), directly to the shading results before transparency, or just before post-processing?
I’m not sure. I would *think* before shading, but I could be totally off on that.
Looks interesting, though I’m willing to bet it’s not possible to get accurate self-shadowing from this technique.
Do you think it would make a noticeable difference? If you applied it as a post-process after self-shadowing, I can’t imagine the minute inaccuracies would be noticeable. I mean, sure, it wouldn’t be as accurate as raytracing, but we are talking about real-time approximations here. If nothing else, I’m pretty confident it could produce the same results as POM at a lower computation cost.
It’s from 2008; I’m sure there’s some reason it isn’t used anywhere, even if neither I nor anyone else can point to it yet. Probably another one of those ideas that seems really good in your head until you find that one problem that really messes it up.