Convert world space coordinates to Capture 2D component view space coordinates

I need to implement a basic augmented display.
For example:
[image]
There are actors in the scene that are not visible that I want to show up as icons on my display: trackers, nav points, markers, etc.

The problem is that my display in this case is a CCTV style setup, meaning there’s no Camera component, only a couple of Scene Capture 2D components.
Scene Capture components don’t seem to have a world-to-viewport translation node, which leaves me at an impasse.

The only thing I’m missing is a way to translate world space to view space for a given Scene Capture component.
I could have calculated it myself, but I can’t get the aspect ratio from the Scene Capture component either, and I don’t understand matrices well enough to work out the calculation that way.

Any help would be appreciated.

The answer is TRIGONOMETRY. Try to build a triangle you can solve, for example from the distance to two points and an angle. For that picture, it would help a great deal if you knew the altitude above the road.

I will add a picture here for a similar problem (it is entirely inside Unreal, so I know the camera direction and location, and it still has some bug that I cannot find).

ps.
Is this a low-quality photo, or a render from Unreal? :smiley:

pps.
This is my quite old solution (from around the 4.2 era):

For some reason it did not work properly now, so I recreated it.

This is the SAME graph, just redone recently; since it works for me (camera facing directly down), I did not bother to make it more general.

This will give the “exact” 3D coordinates of the mouse pointer on some flat horizontal surface. For your case, you would need to compensate for the Z axis of the target point.

Your solution relies on Get Player Controller, which references the player camera and not a specific Scene Capture component, so it doesn’t work for me; I already tried that.

Also, I’m not using a mouse, and I don’t need to project a screen coordinate to world space.
I need the opposite: projecting world space to view space.

Here’s something I made in Unity; it’s low quality because it needs to run on the Oculus Quest.
What I need done is very easy in Unity, which makes this all the more frustrating.

Not quite following you.
I know trigonometry, and the world locations of all the objects are known.
But how do I translate that to view space without at least the Scene Capture component’s aspect ratio?
Without the aspect ratio, I can’t calculate the camera frustum.

It is still trigonometry.

You have the location of something in 3D space.
You have the location of the scene capture in 3D space.
You (probably) have the camera plane (plane normal and distance to the camera) that everything is projected onto.

So it is possible to calculate 2D coordinates on the camera plane from the 3D coordinates of the object.
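That projection can be sketched with plain vector math; this is illustrative Python, not Unreal API, and the axis arguments (forward/right/up) are assumptions about how the capture is oriented:

```python
def project_to_plane(target, cam_pos, cam_forward, cam_right, cam_up, plane_dist=1.0):
    """Project a 3D point onto the camera plane sitting plane_dist in front
    of the camera, returning 2D coordinates on that plane."""
    # Vector from camera to target, then its components along the camera axes.
    d = [t - c for t, c in zip(target, cam_pos)]
    depth = sum(a * b for a, b in zip(d, cam_forward))   # distance along view direction
    if depth <= 0:
        return None                                      # target is behind the camera
    x = sum(a * b for a, b in zip(d, cam_right))         # horizontal offset
    y = sum(a * b for a, b in zip(d, cam_up))            # vertical offset
    # Similar triangles: rescale the offsets onto the plane at plane_dist.
    return (x * plane_dist / depth, y * plane_dist / depth)
```

For a camera at the origin looking down +X, a point at (2, 1, 0.5) lands at (0.5, 0.25) on the unit-distance plane.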

For that, you could build a test scene where everything is nicely aligned in 3D space and work out all the properties of the scene capture’s virtual camera, including its aspect ratio.
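That calibration idea can be sketched like this (illustrative Python; it assumes that, thanks to the aligned test scene, you know the marker’s camera-space position as depth plus right/up offsets, and where it shows up on screen):

```python
def recover_camera_params(cam_space_point, observed_uv):
    """Given one marker with known camera-space position (depth, right, up)
    and its observed normalized screen position (u, v) in [-1, 1],
    recover the half-FOV tangents and the aspect ratio."""
    depth, right, up = cam_space_point
    u, v = observed_uv
    tan_half_hfov = right / (depth * u)   # horizontal half-extent per unit of depth
    tan_half_vfov = up / (depth * v)      # vertical half-extent per unit of depth
    aspect = tan_half_hfov / tan_half_vfov
    return tan_half_hfov, tan_half_vfov, aspect
```

One such measurement fixes the virtual camera; e.g. a 90° horizontal FOV at 16:9 gives half-FOV tangents of 1.0 and 0.5625.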

That means it would not be responsive, but that gives me an idea.

Just realized that in my case the marker widget overlays the image that shows the camera view, so I could get the widget’s ratio and assume the same for the camera.
Not a perfect solution, since it doesn’t cover cases where AR entities are visible in only a segment of the display (like in fighter jets), but it will do for now.

I’ll report back after testing.

Almost got it.
• Unrotate the vector from target to camera (for ease of calculation, not needing camera up/right vectors).
• Take the X component of the vector, which is now the distance from the camera plane, and from there, use trigonometry and the FOV of the camera to calculate the extents of the view port in relation to said vector.
Effectively projecting the extents of the viewport to camera space.
• Divide the vector’s Y and Z coordinates by the horizontal and vertical extents respectively.
That should return the coordinates in widget/view space, mapped from -1 to 1.
• [OPTIONAL] Divide by 2, to get coordinates from -0.5 to 0.5.
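The steps above can be sketched in plain Python (illustrative only, not Unreal API; I’m assuming the FOV value is the horizontal FOV and that camera space is X-forward, Y-right, Z-up, as in Unreal):

```python
import math

def world_to_view(target, cam_pos, unrotate, hfov_deg, aspect):
    """Map a world-space point to normalized view coordinates in [-1, 1].
    `unrotate` takes a world-space vector into camera space
    (X = forward, Y = right, Z = up)."""
    # Step 1: unrotate the camera-to-target vector into camera space.
    d = [t - c for t, c in zip(target, cam_pos)]
    x, y, z = unrotate(d)
    if x <= 0:
        return None                                        # behind the camera
    # Step 2: view extents at distance x, from the horizontal FOV.
    half_h = x * math.tan(math.radians(hfov_deg) / 2.0)    # horizontal half-extent
    half_v = half_h / aspect                               # vertical half-extent
    # Step 3: divide the offsets by the extents.
    return (y / half_h, z / half_v)
```

With an identity rotation, a 90° FOV, and 16:9 aspect, a point ten units ahead and ten to the right maps to roughly (1, 0), i.e. the right edge of the view.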

The horizontal coordinate works perfectly, but for some reason I have an offset for the vertical coordinate, and I can’t seem to figure it out.

I tried scaling the screen so the widget aspect ratio is 1:1, and I get some weird behavior.
The Z (vertical) coordinate is half of what it should be, and has what appears to be some linear scaling offset.
The offset would normally have made sense as a mistake in calculating the aspect ratio, but when the aspect is 1:1 it just leaves me scratching my head.
The scale issue of the vertical coordinate is a mystery to me.

The camera ratio is not the texture ratio, I checked that.

Here’s the graph so far.
Note the vertical coordinate is multiplied by -1, while the horizontal is multiplied by 0.5.
No idea what’s going on with the vertical coordinate.


Seems to work well when both the render texture resolution ratio and the widget size ratio are 1:1.
In all other cases there’s a linear offset from the center.
I’m completely stumped at this point.
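For what it’s worth, a linear offset that grows away from the center is what you would get if the [-1, 1] coordinates are scaled by the widget size without first being shifted into a [0, 1] range. This is a guess, not a confirmed diagnosis; a sketch of the mapping I would expect, assuming (0, 0) is the widget’s top-left corner (the Y flip matches the -1 on the vertical coordinate):

```python
def view_to_widget(u, v, widget_w, widget_h):
    """Map normalized view coordinates (u, v) in [-1, 1] to widget-space
    pixels, with (0, 0) at the widget's top-left corner."""
    px = (u * 0.5 + 0.5) * widget_w     # shift [-1, 1] to [0, 1], then scale
    py = (-v * 0.5 + 0.5) * widget_h    # flip Y: world up = screen up
    return px, py
```

The center of the view then lands at the center of the widget regardless of aspect ratio, which is exactly what breaks when the +0.5 shift is missing.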

Hi,

Have you found a solution for this?