Make ReflectionVector ignore camera distance

Hello everyone!
I’m pretty new to materials and I have the following question:

How do I make ReflectionVector not scale with camera distance?

To make it easier to understand: I would like moving the camera away from the object to look like this:

and NOT like this:

I have been trying to achieve this for nearly a whole day and I’m not even sure if it is possible, as I couldn’t find anything to help me. I will be very grateful for any help.

I think you may be misunderstanding what a reflection vector actually is. Imagine a ray is shot from the camera, lands on the surface of your mesh, and bounces off. The direction of the ray as it bounces off is the reflection vector, and it is a result of both the camera vector (and thus the camera’s position in a perspective projection) and the surface’s normal vector.
That said, the same math can be done with any arbitrary vector.
There is a node “custom reflection vector” which allows you to easily calculate what the reflection vector would be for any chosen vector, instead of the true camera vector.
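For reference, the underlying math is tiny. Here’s a rough HLSL sketch, as you might write in a Custom node (the function name is mine, and sign conventions can differ between engines):

```hlsl
// Mirror a direction about a surface normal. ViewDir points from the
// camera toward the surface; Normal is the normalized surface normal.
// Feed in any arbitrary direction instead of the true camera vector
// and you get the same kind of result the custom reflection vector
// node produces.
float3 ReflectAboutNormal(float3 ViewDir, float3 Normal)
{
    // Classic reflection formula: R = V - 2 * dot(V, N) * N.
    // HLSL's built-in reflect(ViewDir, Normal) computes the same thing.
    return ViewDir - 2.0 * dot(ViewDir, Normal) * Normal;
}
```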

Thank you for responding!

I know how the reflection vector works, I just used this as an example. In my case I could use the camera vector as well. I was thinking about some math manipulation using e.g. the distance between the camera and the actor.

I tried using the custom reflection vector node, but I don’t know how I can get the vector of a different camera (not the player character’s one).

If you know how I could do that I would really appreciate any help!

What effect are you trying to achieve, exactly? The distance affects the vector not directly because of the position, but because of the field of view of a perspective camera. Pixels at the edge of the camera’s view will hit a surface from a slightly different angle than those at the center.

This is not true for an orthographic camera, where all the view rays are parallel. Thus, plugging a Constant3Vector into the custom reflection vector is essentially like creating an orthographic camera facing the chosen direction - the absolute position is arbitrary. This could also be compared to directional/sunlight - the position doesn’t matter, only the direction.

You could also take the difference between the absolute world position of the mesh and any arbitrary point in world space. Normalize this vector and it would give you the direction between those two points. This would be the opposite extreme, where all view rays converge at one point in space. Direction would be irrelevant and only position would matter. This could also be compared to a point light.
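In material terms both extremes reduce to a one-line vector. A rough HLSL sketch, with hypothetical names:

```hlsl
// 1) Constant direction: every pixel uses the same vector, like an
//    orthographic camera or a directional light. Position is irrelevant.
float3 DirectionalSource(float3 ConstantDir)
{
    return normalize(ConstantDir);
}

// 2) Converging point: every pixel's ray points away from one
//    world-space point, like a point light. Only the position matters.
float3 PointSource(float3 PixelWorldPos, float3 FocusPoint)
{
    return normalize(PixelWorldPos - FocusPoint);
}
```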

Either of these would probably look quite bizarre if viewed from a direction or position that differed from the source vector. It would be kind of like viewing 3D chalk art from the wrong spot.
Also worth mentioning is the paradox of the backside of an object. If I am viewing it from the front and you are viewing it from the back (but are seeing a reflection vector based on my point of view), then what is the reflection vector of a point I can’t actually see? In theory there is no reflection vector if a view ray can’t reach that pixel from my “camera”, but technically one could be calculated as though it had.

Things stop working once the angle between the view ray and the surface it strikes exceeds 180°. Normally it would be impossible for such an event to occur. If we go back to our light example, cameras are like spotlights: they care about both direction and position. If you shine a light on an object, no direct light will reach the backside.
So your secondary observer would just see darkness.

For the fun of it…


Here’s what it looks like when using the world position method I mentioned. Our secondary observer is viewing the re-projected reflection from the side angle of the primary observer. The right-hand side is pretty normal, but as the view ray becomes parallel to the surface, things get really stretched. Once we pass the physically possible view, we can see that the reflection vector gets flipped and now our scene is upside down.
Here is the POV that particular reflection vector was calculated from, and the only angle from which it looks physically correct. When viewed from the opposite, impossible side it looks like refraction. A dot product of our imaginary POV and the vertex normal would allow us to mask out the impossible views. Here’s what that complete setup looks like…
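In HLSL terms, the masking step is roughly this (hypothetical names; the sign depends on which way your POV vector points):

```hlsl
// Mask out the physically impossible views. PovDir points from the
// imaginary POV toward the pixel; VertexNormal is the surface normal.
float ImpossibleViewMask(float3 PovDir, float3 VertexNormal)
{
    // A surface the imaginary camera could see faces back toward it,
    // so dot(PovDir, VertexNormal) is negative there. saturate()
    // clamps the back-facing (impossible) side to zero.
    return saturate(dot(-PovDir, VertexNormal));
}
```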

So I’m trying to do seamless portals, look:

and the problem is that the portal just looks weird when you’re not right in front of it.

The method you mentioned in the next post might work, but how do I set the POV parameter? I know that I can use a Material Parameter Collection, but how do I get the required value from the camera? Maybe the camera’s forward vector?

It’ll never look right with just a normal static scene capture like that. A cube capture will look significantly closer because it uses 3D texture coordinates, but even basic cubemap projection isn’t going to look totally seamless as a portal. If you want to project a captured cubemap to the view, then you just use the camera vector for the texture coordinates. You don’t need to take the surface normal of a portal into consideration because it doesn’t have a surface, so the light isn’t reflecting or refracting, just passing through.
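As a sketch, that projection is just a cube sample indexed by the view direction (HLSL; whether you need to negate the camera vector depends on the engine’s convention):

```hlsl
// Sample a captured cubemap straight along the view direction.
// No surface normal involved: a portal has no surface, light just
// passes through, so the camera vector itself is the lookup.
float3 PortalColor(TextureCube Capture, SamplerState CaptureSampler,
                   float3 CameraVector)
{
    return Capture.Sample(CaptureSampler, CameraVector).rgb;
}
```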
You would need to recapture your scene every frame on the other side of the portal to make a truly seamless portal. This is because even if you can line up the projection, it is usually still obvious that it’s a flat image, as there is no parallax. Thus, creating the effect is a matter of programming the secondary camera’s position properly to capture the view, not the material.

What I’m saying is, your problem isn’t the material. It’s that your second camera is stationary.

I’m already using a scene cube capture.

I created this topic hoping that I wouldn’t have to render every frame twice, because it’s extremely performance-heavy. So I assume there is no other workaround.

I will probably leave this portal just like it is or I will add some post process effects to hide this issue.

Anyway, thank you for your help, I’m REALLY thankful for your time and effort!

Ahh, well then I think I see where things are going wrong and what you’re really trying to achieve.

While it’s true that it’ll never be perfect for the reasons I mentioned, it can be improved dramatically from where you’re at. What would help the most is grounding the cubemap in world space. This also has issues, but I think they’ll be visually less jarring. It will fix the feeling that the cubemap is floating off at infinity and look much better.

One really simple way to do this is to add the “Interior Cubemap” node. This will project the cubemap to the interior of a fake cube. But sometimes the box shape can be a bit too obvious.
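For the curious, interior mapping boils down to an inside-out ray/box intersection. Here’s a rough HLSL sketch of the idea (not the actual Interior Cubemap implementation; names are mine):

```hlsl
// Intersect a view ray (starting inside) with an axis-aligned box and
// use the hit point, relative to the box center, as the cubemap lookup.
float3 InteriorBoxUVW(float3 RayOrigin, float3 RayDir,
                      float3 BoxCenter, float3 BoxExtents)
{
    // Inside-out slab test: for each axis, distance to the far plane.
    float3 invDir = 1.0 / RayDir; // assumes no component is exactly zero
    float3 planeA = (BoxCenter - BoxExtents - RayOrigin) * invDir;
    float3 planeB = (BoxCenter + BoxExtents - RayOrigin) * invDir;
    float3 farPlanes = max(planeA, planeB);
    float t = min(min(farPlanes.x, farPlanes.y), farPlanes.z);
    // The exit point doubles as a cube-texture direction.
    return (RayOrigin + RayDir * t) - BoxCenter;
}
```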

There are other ways to achieve a similar effect but they all have their flaws. You could project to the inside of a fake sphere instead.


There isn’t a native sphere interior mapping function to my knowledge*, but we can hijack the “RayTracedSphere” to achieve a close end result. Here’s an example of that in action. As you can see there’s a bit of distortion, but it kind of looks like I’m looking into a 3D space.


The offset allows you to shift the sphere relative to the portal object. The radius determines how large the sphere is; it needs to be at least as large as your object for a convincing result. If you make it too big, the skybox effect will return. The reflection vector is what tricks the sphere trace into looking inside out. Lastly, that UVW is fed into our cubemap. Throw in some special effects to hide the flaws, and you’ll be looking pretty good.
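The sphere trace itself is simple enough to sketch in HLSL (parameter names are mine; assumes RayDir is normalized):

```hlsl
// Ray/sphere intersection from inside the sphere. The exit point,
// relative to the sphere's center, becomes the UVW for the cubemap.
float3 SphereInteriorUVW(float3 PixelWorldPos, float3 RayDir,
                         float3 PortalWorldPos, float3 Offset, float Radius)
{
    float3 center = PortalWorldPos + Offset;  // shift the fake sphere
    float3 oc = PixelWorldPos - center;       // ray origin relative to center
    // Solve |oc + t * RayDir|^2 = Radius^2 and take the far (exit) root.
    float b = dot(oc, RayDir);
    float c = dot(oc, oc) - Radius * Radius;
    float t = -b + sqrt(max(b * b - c, 0.0));
    // Direction from the center to the exit point indexes the cubemap.
    return normalize(oc + RayDir * t);
}
```

RayDir here is whatever you feed the trace; in this setup, the reflection vector described above.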

*I take it back, I remembered there is one called “DistanceLimitedReflections”. Much like my example, it raycasts a sphere. The results look the same to me, and mine seems to cost a few fewer instructions… Anyway, hope this helps.

Thank you! This method works really well. Now it will be much easier to hide the remaining artifacts.

I really appreciate your help!


Is it possible to apply a rotation to the sphere (HDRI)?