Upside-down refraction without using ray tracing?

Is it possible?

A glass sphere should refract things upside-down, like this:
image

Simply inputting a negative value into the IOR doesn’t work. It looks like this:

Is there a trick to make refraction upside-down without enabling raytracing?

You could write your own in-material refraction instead of using the default refraction. Thankfully, you can read the engine’s shader code for refraction to get the exact formulas it uses. The only tricky part is that you need to make some adjustments for screen resolution, aspect ratio, etc.
Instead of sampling the scene directly and then applying the refraction, you’d need to flip the sample across its axis and then apply an offset (so that it’s sampling roughly the same area of the scene as before the flip).
Imagine your ball is in the bottom right of the screen: if you simply flip the axis before refracting, it will sample what used to be in the top left. So you need to flip it, and then shift it back. You can shift it all the way, or just always align the refraction to sample the center of the screen, which reduces the likelihood of sampling off screen and causing artifacts.

Another downside is that an in-material refraction shader can’t do cumulative refraction. In other words, if you place two transparent objects in front of each other, instead of seeing the combined refraction of both, the second object will be invisible, because the scene sample omits transparent objects.

I appreciate the reply, however all this is way over my head, I’m afraid :smiley:
As a 3D artist, I mostly just work with modelling and texturing. “Reading and writing” shaders is way beyond my area of expertise. I can only do stuff in the material editor in Unreal. I was hoping there’s some kind of node or math that could’ve worked in there… but it seems not.

The shader can be made with nodes, but you’ll need to understand how to offset the scene-sample UV and the basics of refraction math. Since it’s a screen-space effect, you’ll also need to understand how to compensate for screen attributes.

As I mentioned before, all of this can be achieved with nodes, but being able to interpret the formulas in the HLSL means you won’t have to start from scratch, even if you can’t actually write code. Math is math, so the formulas can be lifted straight from the shader code and duplicated in nodes.

For example, I believe the core of the shader is 1 - IOR * (Pixel Normal) transformed into view space. The shader then adjusts for aspect ratio and resolution scaling, plus there’s some extra handling for things like what to do when sampling outside the screen. Without that, the refraction wouldn’t look consistent across differently sized screens.
But ultimately no, it isn’t trivial.