So I have a scene with a mirror, a character, and a monster. I want to do something with the monster when I see it through the mirror, but I'm running into a few issues.
Is it possible to detect an actor reflected in the mirror?
I don’t think so. That’s why I tried to line trace from the character to the mirror, get the view angle, and do a box trace from the mirror hit point along the reflected direction (box size = mirror bounds size), checking for the monster. But this has two main limitations:
- I don’t handle the case where only a small part of the mirror is visible. I don’t know how to get the size of the visible part of the mirror, so the box trace keeps the full mirror’s size. In some cases the monster is detected even though the player can’t actually see it through the mirror.
- If I box trace on the visibility channel with the mirror’s size, the first blocking object hit ‘stops’ the trace. So imagine a small cube sitting between the monster and the mirror: the box trace stops on the cube and never reaches the monster.
Do you have any ideas how to handle this properly?
When you say “detect,” what do you mean? Using one of the built-in Unreal AI sensory objects, or using custom code?
You can use a render target: render the scene to it from the point of view of the monster, and override materials so that geometry is black, the mirror is reflective, and the player is green or something. The render target probably only needs to be 128x128 or some such small size. Then check the amount of green in the render target texture. There are different ways of doing this depending on “where” and “how” you need the data – a read-back will stall the pipeline, but perhaps you can use the render target as input to another pass that counts/samples green pixels, or use a stencil test of some sort. Or read it back next frame, so reactions are one frame delayed.
Thank you for your help,
“Detect”: when the player is able to see the monster in the mirror, trigger some action. If possible, it would basically be some kind of line trace where I check the hit and react according to the result.
The render target seems interesting, although I’ve never used one, so I need to take a deeper look at the docs. For now I think I will do an approximation: just check whether the player is directly looking at the mirror, and do a custom cone trace from the center of the mirror along the reflected direction.
“some kind of line trace” is super inaccurate. Line trace from what part of the player (camera? head? center?) to what part of the monster? (center? head? limbs?) A line trace is generally super limited when it comes to “accurate” perception.
If you want non-glitchy gameplay based on this kind of effect, I think the render target is the way to go, honestly.
I’ll take a look when I have time then. But as I don’t know how it works, I think your solution is very time-consuming. By “some kind of linetrace” I meant a synchronous function to get the result during the same frame’s calculation, but it surely would not be a problem to get the result one frame later.
What I did so far:
- Divide the mirror’s width into multiple sections.
- From these sample points, line trace to the player on the visibility channel and check the angle against the player’s FOV. This indicates approximately which part of the mirror the player can see.
- Take the midpoint M of the visible part of the mirror.
- Cone trace from M along the reflected direction on the monster channel.
- If the monster was detected on the monster channel, line trace from the monster to M on the visibility channel.
- If that last line trace succeeds, it means the player can see the monster.
There are a lot of approximations in there (not taking the mirror’s height into account, line traces at approximate positions, etc.), but it works well enough.
Also note: if the simulation uses “can the player pawn see the monster,” but the camera is not first-person, then the simulation may behave differently than the player expects. E.g., the player camera sees the monster but the player pawn does not: the player will go “huh? I can see it!” And vice versa.
I tried to use Render Targets with stencil buffer, but I’m facing multiple issues:
Planar reflection and stencil buffer: the planar reflection doesn’t reflect stencil values. I.e., when looking directly at the player, the SceneCapture2D shows the player in green (everything black except objects with a stencil value of 1 = the player), but not when looking at the mirror (the player appears in its normal colors).
How to determine the color in the render target texture: how do I measure the amount of green, for example? Do I need to save it as an image and read it afterwards, or is there a way to get the pixel array from the render target and do the calculation manually?
Thanks in advance
There are some render counter queries you may be able to use, on the C++ side, but I have not used them myself (only seen them mentioned when reading through the code) so I can’t help with code snippets.
The most straightforward way would be to read back the image and look at it on the CPU; presumably you will read the image from the previously rendered frame, to get some double-buffering going and not stall the pipeline too much.