Detect whether given actors are visible in a shaped region of the screen

Hello there, very new Unreal Engine programmer here!

First of all, I’m sorry if this isn’t the correct category for this question; since I am prototyping in Blueprint, I thought this would be the most appropriate place.

So I am trying to simulate a missile launcher, and for this I need to have a small shape in the center of the screen (let’s say a square) and to be able to detect whether given objects (by list / tags / channels or whatever) are visible in that shape.
Since it is screen-based (with perspective) it can’t be a shape-traced square; it is more of a cone, and the problem essentially becomes a cone-tracing problem.

I had some ideas, but none of them seems to solve the problem adequately:

  • Iterative sphere tracing with a growing radius
    → how to handle occlusion?
    → at long distance it might be very computationally intensive

  • Angle-and-distance is-in-cone checks
    → how to handle occlusion for the whole mesh?

  • my current solution, which I am trying to implement, but it will re-render the scene
    → efficiently render the scene to another render target / camera
    → try to render objects without lighting, anti-aliasing, etc…
    → associate a unique color with every object
    → read all colors from the render target and look up the associated actors
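The color-ID idea in the last bullet can be sketched independently of the engine. A minimal, purely illustrative sketch (not Unreal API; the 24-bit packing and names are my assumptions): each tracked object is rendered with a unique flat color into a small crosshair-sized render target, and decoding the readback buffer is just collecting the distinct non-background colors. Occluded objects never write a pixel, so the depth buffer handles occlusion for free.

```cpp
#include <cstdint>
#include <set>
#include <vector>

// Pack an object index into a unique RGB color (supports up to 2^24 ids).
uint32_t IdToColor(uint32_t id) { return id & 0xFFFFFFu; }

// Decode the set of visible object ids from the raw pixel buffer of the
// crosshair-sized render target.
std::set<uint32_t> VisibleIds(const std::vector<uint32_t>& pixels) {
    std::set<uint32_t> ids;
    for (uint32_t p : pixels)
        if (p != 0)                    // 0 = background, no object
            ids.insert(p & 0xFFFFFFu);
    return ids;
}
```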

I also tried to look for channel-based cone triggers, but I didn’t find anything.
But it feels strangely difficult for something that seems related to the classic “AI detects whether it sees you” problem; maybe because I am after more precision and range?

Do any of you have ideas for better solutions?
I’d really appreciate it !

Well, it definitely doesn’t need to be cone-shaped :slight_smile:

For it to be an intuitive aiming experience, you need to know which actors can be seen in the target square.

Surely a box trace would be a way to do that?

> I also tried to look for channel-based cone triggers, but I didn’t find anything.
> But it feels strangely difficult for something that seems related to the classic “AI detects whether it sees you” problem; maybe because I am after more precision and range?

Isn’t it all just a dot product? I would like to see it explained with a picture, if possible.
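A minimal sketch of the dot-product check being alluded to here (plain C++, not Unreal types; names are illustrative): a target lies inside a cone of half-angle `halfAngleRad` around the camera’s forward vector when `dot(forward, normalize(target - camera)) >= cos(halfAngleRad)`. Note this alone answers neither occlusion nor partial-mesh visibility, which is the sticking point in this thread.

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

double Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

Vec3 Normalize(Vec3 v) {
    double len = std::sqrt(Dot(v, v));
    return {v.x / len, v.y / len, v.z / len};
}

// True when `target` is within the cone of half-angle `halfAngleRad`
// opening along `forward` from `camera`.
bool IsInCone(Vec3 camera, Vec3 forward, Vec3 target, double halfAngleRad) {
    Vec3 toTarget = Normalize({target.x - camera.x,
                               target.y - camera.y,
                               target.z - camera.z});
    return Dot(Normalize(forward), toTarget) >= std::cos(halfAngleRad);
}
```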

So I tried it with a box trace to show the issue I have with it:
(the green crosshair and yellow square are of equal screen size)
On the left is a close object detection
On the right is a farther object detection
What I am trying to show is that the further away the box trace sweeps, the smaller the box is relative to the screen center.
Under the two images I added a visualisation of what I would like to do
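The geometry behind these screenshots can be sketched directly: to keep covering a square that spans a fixed fraction of the screen width, the box’s world-space half-extent has to grow linearly with distance, which is exactly what a constant-size box trace does not do. A small sketch under assumed conventions (`fovRad` is the horizontal field of view; names are illustrative):

```cpp
#include <cmath>

// World-space half-extent needed at `distance` so the box keeps covering
// a square spanning `screenFraction` of the full screen width.
double RequiredHalfExtent(double distance, double fovRad, double screenFraction) {
    // Half the screen's width in world units at `distance` is
    // distance * tan(fov / 2); scale by the square's screen fraction.
    return distance * std::tan(fovRad / 2.0) * screenFraction;
}
```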


Incredible paint skills :smile:

My goal is to be able to detect, in a part of the screen (for example around the crosshair), any object inside it, whatever the distance!


Yes, but I also need to handle occlusion if there is a building in the way, for instance!
And if half of the mesh is visible / occluded, a line-trace visibility check will not always be accurate depending on the situation.

Right, I have a better idea now.

The further you get from the crosshair, the larger the target area in world terms.

There is no cone trace, so I think you have to do something crafty. I’m not quite sure off the top of my head, but it’s going to involve at least 4 line traces from the corners of the crosshair. That way, at least you know the area you’re focusing on. I’m not sure if you have to ‘splay’ the line traces, but if you do, it will be by a constant value.

Might be an idea to do some experiments in that sort of area…
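The corner-trace idea above can be sketched with plain camera-space math (the +z-forward, +x-right, +y-up convention and the parameter names are my assumptions). `nx` / `ny` are the corner’s normalized screen offsets from the center, e.g. ±0.1 for a square spanning 10% of the half-screen. The resulting directions are constant in camera space, so the “splay” is indeed a fixed angle per corner:

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Direction of a ray through screen offset (nx, ny) from the center,
// in camera space, for a horizontal FOV of `fovRad`.
Vec3 CornerRayDir(double nx, double ny, double fovRad, double aspect) {
    double halfW = std::tan(fovRad / 2.0);       // half-width at depth 1
    Vec3 d = {nx * halfW, ny * halfW / aspect, 1.0};
    double len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    return {d.x / len, d.y / len, d.z / len};    // normalized direction
}
```

In-engine, each of the four directions would be rotated into world space by the camera transform and used as a line-trace direction.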

I don’t have an answer, but the volume you’re trying to search through in world space is called a “frustum”, and maybe the Environment Query System (EQS) could help you? It’s more general than you need, but my idea is that you’d have an AI component positioned at the camera and ask it whether it can see anything along several lines of sight matching the view area you want to detect in.

Have you tried this?
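Since there is no built-in frustum trace, one engine-agnostic piece of this is a per-actor containment test against the crosshair’s frustum. A minimal camera-space sketch (the +z-forward convention and the half-angle parameterization are assumptions; this tests containment only, not occlusion):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// True when point `p` (in camera space) lies inside the crosshair's
// view frustum, described by its horizontal/vertical half-angles.
bool InFrustum(Vec3 p, double halfAngleX, double halfAngleY) {
    if (p.z <= 0.0) return false;               // behind the camera
    // The frustum's lateral bounds grow linearly with depth.
    return std::fabs(p.x) <= p.z * std::tan(halfAngleX) &&
           std::fabs(p.y) <= p.z * std::tan(halfAngleY);
}
```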


If I understood correctly how the “Was[Actor/Component]RecentlyRendered” functions work, they return true if the [Actor/Component] was rendered by the pipeline at most [time] ago.
However, I can’t choose the camera from which it has been rendered, most likely because it is specific to the graphics pipeline rather than to a camera.

While it is a great solution that could tell me whether something is rendered on screen, from what I have understood and tested it doesn’t tell me whether it is occluded, or whether it is rendered inside the shape in the center of my screen.

I don’t know if using collisions will be a good option if you want to detect occlusion.

The other option I can think of to get what you want is to start shooting rays per pixel (or per group of pixels) from the camera and checking for hit actors.
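The nice property of the per-pixel-group approach is that occlusion falls out naturally: each ray reports only its nearest hit, so an actor behind a building is never counted. A toy C++ sketch using spheres in place of actor meshes (purely illustrative; in-engine each sample would be a line trace, and the sample directions would come from the crosshair region):

```cpp
#include <cmath>
#include <set>
#include <vector>

struct Vec3 { double x, y, z; };
struct Sphere { Vec3 center; double radius; int id; };

double Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Parametric distance of the nearest ray/sphere intersection, or -1 if
// none.  `d` must be a normalized direction.
double RaySphere(Vec3 o, Vec3 d, const Sphere& s) {
    Vec3 oc = {o.x - s.center.x, o.y - s.center.y, o.z - s.center.z};
    double b = Dot(oc, d), c = Dot(oc, oc) - s.radius * s.radius;
    double disc = b * b - c;
    if (disc < 0.0) return -1.0;
    double t = -b - std::sqrt(disc);
    return t > 0.0 ? t : -1.0;
}

// For each sample direction, keep only the closest object: that is the
// per-ray occlusion test a line trace performs for you in-engine.
std::set<int> VisibleThroughGrid(Vec3 origin,
                                 const std::vector<Vec3>& dirs,
                                 const std::vector<Sphere>& objects) {
    std::set<int> seen;
    for (const Vec3& d : dirs) {
        double bestT = 1e30;
        int bestId = -1;
        for (const Sphere& s : objects) {
            double t = RaySphere(origin, d, s);
            if (t > 0.0 && t < bestT) { bestT = t; bestId = s.id; }
        }
        if (bestId >= 0) seen.insert(bestId);
    }
    return seen;
}
```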

That’s what we started with… :wink:
