I’m roughing out how some systems might work in a survival game.
Imagine you are hunting deer. The deer should be able to perceive the player and react when it does. Let's consider sight only, since other perception methods work essentially the same way.
My initial thoughts, and every example I could find, share something in common: they are a literal simulation, shooting out tons of linetraces/raycasts in a radius around the AI's head. That's a lot of instructions.
I came up with something that works, but I can tell it's going to get ugly fast as I expand on it. It would be good to simplify.
Suppose there's a sphere trigger around the AI agent. When the player enters it, it sends a message to the agent. The agent then shoots a single raycast at the player: if it hits, the player is seen; if not, something is obstructing the view.
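Here's a rough sketch of that check in plain Python. Everything here (the `Vec3`/`Sphere` types, spheres standing in for occluding geometry) is an illustrative assumption; in a real engine you'd call its raycast/linetrace API instead of rolling the segment test by hand.

```python
import math
from dataclasses import dataclass

@dataclass
class Vec3:
    x: float; y: float; z: float
    def sub(self, o): return Vec3(self.x - o.x, self.y - o.y, self.z - o.z)
    def dot(self, o): return self.x*o.x + self.y*o.y + self.z*o.z
    def length(self): return math.sqrt(self.dot(self))

@dataclass
class Sphere:  # stand-in for any occluding geometry (a rock, a tree trunk)
    center: Vec3
    radius: float

def segment_hits_sphere(a: Vec3, b: Vec3, s: Sphere) -> bool:
    """Does the segment a->b pass through sphere s?"""
    ab = b.sub(a)
    ac = s.center.sub(a)
    ab_len2 = ab.dot(ab)
    # Clamp the projection so we test the segment, not the infinite line.
    t = 0.0 if ab_len2 == 0 else max(0.0, min(1.0, ac.dot(ab) / ab_len2))
    closest = Vec3(a.x + ab.x*t, a.y + ab.y*t, a.z + ab.z*t)
    return closest.sub(s.center).length() <= s.radius

def can_see(agent_eye: Vec3, player_pos: Vec3, occluders) -> bool:
    # The single "raycast": the player is seen iff no occluder blocks the segment.
    return not any(segment_hits_sphere(agent_eye, player_pos, o) for o in occluders)
```

So once the sphere trigger fires, the agent runs `can_see` once per message instead of scanning every frame; an empty path returns seen, a boulder between eye and player returns not seen.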
That is the basic idea. Obviously you can pile more gotchas on top, like firing a shotgun blast of rays instead of one, or timing the raycast to coincide with the animation.
But the core idea is to check directly whether the player is in line of sight, rather than constantly simulating the entire cone of vision.
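One cheap refinement worth noting: a sphere trigger alone gives the deer eyes in the back of its head, so before spending the raycast you can reject players outside a field-of-view cone with a single dot product. A minimal sketch, where the function name and the 140-degree default FOV are my own illustrative assumptions:

```python
import math

def in_view_cone(forward, to_player, fov_degrees=140.0) -> bool:
    """forward and to_player are (x, y, z) tuples; neither needs to be unit length.
    Returns True when the player lies within the agent's vision cone."""
    def norm(v):
        length = math.sqrt(sum(c * c for c in v))
        return tuple(c / length for c in v)
    f, d = norm(forward), norm(to_player)
    cos_angle = sum(a * b for a, b in zip(f, d))
    # Inside the cone when the angle off the forward axis is within half the FOV.
    return cos_angle >= math.cos(math.radians(fov_degrees / 2.0))
```

Only if this passes does the agent cast the actual ray, so a player sneaking up behind the deer never costs a raycast at all.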
Anyway, this isn't exactly a question; I'm just sharing some thoughts because this is new territory for me, and I'd be interested in how other developers have solved similar problems, or whether anyone has opinions or concerns about this approach.