Does anyone know if there is some easy way to obtain an object's screen position from within the material editor (without using a Blueprint feeding a Material Parameter Collection to solve it)? Currently, the Transform Position node only transforms to world space and the like, with no screen-space option.
For example, if the object were in the top left, its whole color would be 0,0
If it were in the middle, it would be 0.5,0.5
Bottom right, 1,1
And so on…
Right now, I've been trying to use all kinds of trig to solve it by taking the object position, the camera position, the camera forward vector, the FOV, etc., to triangulate it all, but it's becoming a mess and I'm apparently a bit rusty on it all.
Turned out to be something ridiculously simple: when biasing, I had a dyslexia moment and was adding the 0.5 before multiplying by 0.5. It was giving me weird issues when zooming, and that mistake would explain why. @stororokw, I ended up coming up with an almost identical solution (minus the goofing on the biasing, and I did some of it with nodes). I caught my mistake while looking at your code and resolved it.
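In other words, the remap from the -1..1 range to 0..1 has to scale first and bias second. Roughly (generic form, not stororokw's exact code):
// Clip = the float4 clip-space position, e.g. mul(float4(WorldPos, 1), View.WorldToClip) pre-UE5
float2 NDC = Clip.xy / Clip.w;                // perspective divide, gives -1..1 with Y pointing up
float2 UV = NDC * float2(0.5, -0.5) + 0.5;    // correct: multiply by 0.5 first, then add 0.5 (Y flipped so top-left is 0,0)
// float2 UV = (NDC + 0.5) * 0.5;             // the mistake described above: adding the 0.5 before multiplying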
Thanks for the help guys. Marking this one as solved!
I tried your method, but it's not working for me in UE5:
[SM6] /Engine/Generated/Material.ush:2681:41: error: no member named ‘WorldToClip’ in ‘(anonymous struct at /Engine/Generated/UniformBuffers/View.ush:551:14)’
float4 Clip = mul(float4(In.xyz,1),View.WorldToClip);
~~~~ ^
Well, I had the same issue. This was my solution. It works well for spheres at least, which is all I was going for. I imagine it wouldn't take much to adapt the basic logic to more complex shape shaders; you'd just have to use coordinates other than the object position, based on the part of the complex object being shaded. Or something like that. Anyway, hope it helps.
I was surprised to learn that the Transform Position node doesn't have a Clip Space option.
But there is a TransformToClipSpace node that implements the custom code everyone's mentioning above.
Yeah, it looks like it does the transform in a way that's compatible with LWC (Large World Coordinates), which makes sense.
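If you'd still rather do it in a Custom node, the LWC-aware equivalent should look roughly like this. This is only a sketch: it assumes an engine version where ResolvedView.PreViewTranslation is an LWC vector and LWCHackToFloat() is available (roughly 5.0-5.3; on other versions, just use the TransformToClipSpace node):
// Custom node input "In" = absolute world position (e.g. the Object Position node)
float3 TranslatedWorld = In.xyz + LWCHackToFloat(ResolvedView.PreViewTranslation);
float4 Clip = mul(float4(TranslatedWorld, 1.0), ResolvedView.TranslatedWorldToClip);
float2 NDC = Clip.xy / Clip.w;               // -1..1, Y pointing up
return NDC * float2(0.5, -0.5) + 0.5;        // 0..1 screen UV: top-left = (0,0), bottom-right = (1,1)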
I'm kind of laughing right now because I actually needed something like this again, so the timing of this post was great. But the really ironic part is that I guess I was the original creator of this thread many years ago and I completely forgot about it…
EDIT AGAIN: I guess I was trippin' and didn't hook up the object position to the World Position input, lol. Yeah, that TransformToClipSpace node works.
Code for the simple test that just turns on red/green in the X/Y channels if they are near the edges (the wiggle room is needed because if it compares against exactly 0 or 1, the whole object is already off the screen and you won't see it change colors):
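(Something along these lines; this is illustrative only, not the exact graph from the post. ScreenUV is assumed to be a Custom node input carrying the 0..1 screen position from above, and 0.05 is an arbitrary margin.)
// ScreenUV = the 0..1 screen position; Margin = the "wiggle room" so the color flips before the object is fully off-screen
float Margin = 0.05;
float NearEdgeX = (ScreenUV.x < Margin || ScreenUV.x > 1.0 - Margin) ? 1.0 : 0.0;
float NearEdgeY = (ScreenUV.y < Margin || ScreenUV.y > 1.0 - Margin) ? 1.0 : 0.0;
return float3(NearEdgeX, NearEdgeY, 0.0);    // red near the left/right edges, green near the top/bottom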
You can move the camera around, zoom in and out, and so on, and it definitely works as intended. It reports just the X,Y value of the object, which is exactly what I needed, minus the headache of trying to do it by hand. I also verified that it works with FOV: set it high or low and it still maps correctly. Screen percentage / resolution scaling works as well.