Ok, so here’s the deal… I’ve been working on a digital trading card game for around 5 months now. It’s going well, but my latest hurdle is trying to have a free-roaming 3D environment coexist with a hand of 3D card meshes. Obviously, like any card game, you’d want your cards rendered on top of everything else, and NOT colliding with the environment. This isn’t a common issue in other card games, because either everything is done in 2D, or in 3D with a static environment…
So to solve this issue, I’ve created a class that basically operates as an offscreen scene renderer. You feed it an XYZ “Stage Size”; it auto-detects any actors or components within its world bounds, renders them out to a render target texture, and feeds that into an MID for you to assign to a UMG widget. It’s also responsible for handling raycasts, by translating your mouse’s on-screen 2D coordinates into the 3D world space of the offscreen renderer’s “Stage” bounds. It broadcasts multicast delegates for anything in the actor/component array that it hits.
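Roughly, the UV-to-ray translation looks like this (an illustrative sketch in plain C++ with the engine types stripped out; the names, the chosen FOV handling, and the +X-forward convention are assumptions for illustration, not the actual class API):

```cpp
// Illustrative sketch only -- the real class uses engine types; these
// names, and the +X-forward / +Y-right / +Z-up convention, are assumptions.
#include <cmath>

struct Vec3 { double x, y, z; };

static Vec3 Normalize(Vec3 v) {
    double len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / len, v.y / len, v.z / len};
}

// Translate a widget-local UV (0..1, origin top-left) into a world-space
// ray through a perspective capture at `camPos`, looking down +X, with
// horizontal FOV `fovDeg` and render-target aspect ratio `aspect` (w/h).
static void DeprojectUV(double u, double v, Vec3 camPos, double fovDeg,
                        double aspect, Vec3& outOrigin, Vec3& outDir) {
    const double kPi = 3.14159265358979323846;
    double ndcX = 2.0 * u - 1.0;  // [-1, 1], left to right
    double ndcY = 1.0 - 2.0 * v;  // flipped: widget V grows downward
    double tanHalfFov = std::tan(fovDeg * 0.5 * kPi / 180.0);
    // Horizontal FOV sets the lateral extent; vertical extent is divided
    // by the aspect ratio.
    outOrigin = camPos;
    outDir = Normalize({1.0, ndcX * tanHalfFov, ndcY * tanHalfFov / aspect});
}
```

The ray produced from the UV under the cursor is then what gets traced against the actors inside the stage bounds.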
This looks great, and works OK, but there’s an issue…
The main parts to watch are between 18s and 48s, while testing the edges of Row3_Cone, Row2_Cube, and Row2_Cylinder (incorrectly named Row4_Cylinder2 in the video).
As you can see, the perspective projection rendered to the render target texture isn’t exactly the same as what you would see walking around in the world.
The further the object is (vertically) from the center, and the more depth it contains (or the closer it is to the screen), the more inaccurate the raycast becomes.
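To put a number on the symptom (this is just a guess at the failure mode, with made-up FOV values): if the deprojection were using the player camera’s FOV instead of the capture component’s, the hit error would be zero at the screen center and grow linearly with both the off-center distance and the depth of the hit, which matches what I’m seeing:

```cpp
// A guess at the failure mode, with made-up numbers: deproject with the
// wrong FOV and see how far the hit on a plane at `depth` drifts.
#include <cmath>

// Lateral hit position on a plane `depth` units in front of the camera,
// for a normalized device coordinate `ndc` in [-1, 1] and horizontal FOV.
static double HitOffset(double ndc, double fovDeg, double depth) {
    const double kPi = 3.14159265358979323846;
    return depth * ndc * std::tan(fovDeg * 0.5 * kPi / 180.0);
}

// Error between deprojecting with the capture's FOV vs. the player
// camera's FOV: zero at the screen center, growing linearly with both
// the off-center distance and the depth of the hit.
static double FovMismatchError(double ndc, double depth) {
    const double captureFov = 90.0;  // assumed capture component FOV
    const double screenFov  = 60.0;  // assumed player camera FOV
    return std::fabs(HitOffset(ndc, captureFov, depth) -
                     HitOffset(ndc, screenFov, depth));
}
```

Since the error is linear in both factors, `FovMismatchError(0.0, d)` is 0 for any depth, while `FovMismatchError(1.0, 200.0)` is exactly four times `FovMismatchError(0.5, 100.0)`.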
Setting the capture component’s ProjectionType to Orthographic is sadly NOT an option, since the entire reason for doing this in the first place was to make the user actually feel like these 3D meshes were really in front of them, animations, rotations and all.
Does anyone have any suggestions on how this can be fixed? Please ask for code if you feel you need it.