I’m sick of how modern game AIs are either too smart or too stupid, either blind or born with the gift of X-ray vision.
I don’t know if this has already been done or anything. I just had a serious thought about this the other day and figured I’d share it with the world. I hope the idea isn’t secretly copyrighted, blahblahblah.
How I assume normal AI Vision works:
When a game AI wants to see you, it usually does a ray trace from the AI’s eye position to your character’s position. If there is a vision blocker/collision in the path of the ray trace, it fails and the AI won’t see you. If the ray trace succeeds, the AI detects you and starts killing you.
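For reference, that standard check can be sketched like this. This is a toy 2D version in Python; `can_see`, the circular occluders, and every name here are made up for illustration, not any engine’s actual API:

```python
from math import hypot

def segment_hits_circle(p0, p1, center, radius):
    """True if the segment p0->p1 passes through a circular occluder."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    length_sq = dx * dx + dy * dy
    if length_sq == 0:
        return hypot(p0[0] - center[0], p0[1] - center[1]) <= radius
    # Parameter of the closest point on the segment to the circle center.
    t = ((center[0] - p0[0]) * dx + (center[1] - p0[1]) * dy) / length_sq
    t = max(0.0, min(1.0, t))
    closest = (p0[0] + t * dx, p0[1] + t * dy)
    return hypot(closest[0] - center[0], closest[1] - center[1]) <= radius

def can_see(eye, target, occluders):
    """Classic ray-trace check: visible iff no occluder blocks the ray."""
    return not any(segment_hits_circle(eye, target, c, r) for c, r in occluders)

# A wall between AI and player blocks sight; foliage with no collision
# simply isn't in the occluder list, so the AI sees right through it.
wall = (((5.0, 0.0), 1.0),)
print(can_see((0.0, 0.0), (10.0, 0.0), wall))  # False: blocked
print(can_see((0.0, 0.0), (10.0, 0.0), ()))    # True: nothing in the way
```

The foliage problem below falls straight out of this: anything without collision never makes it into the occluder list, so the check can’t fail on it.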
This is the usual solution, and it works. But wait until you hide behind dense foliage that has no collision: the foliage should perfectly block you from the enemy’s sight, yet you get detected anyway, because the foliage has no collision. Or the foliage has transparency gaps, and the AI’s ray trace just happens to pass through an open area, traces you successfully, and your stealth fails.
The AI recognizes you as a threat through a one-centimeter gap from 20 meters away. No human being can do that. Of course, that’s why it’s AI.
But AI is supposed to simulate human behavior, if I’m not wrong. X-ray vision isn’t a human capability.
So, a pixel/image-based AI vision system (an AI image G-buffer???):
It’s based on a power-of-2-resolution image, like a camera. When it’s activated, the AI shoots an image (or a short video) of your character, either grayscale or a pure 1-bit image of 0s and 1s. The subject the AI is looking for is rendered as 1 (white), and everything else as 0 (black). Then, either in real time or after the shot, the image or video of 0s and 1s is processed: basically, averaged. That averaged value determines whether the subject should count as visible. 32 by 32 pixels might just be enough for general AI vision.
For example, say a player is behind foliage and only 5 percent of his body is visible to the enemy. A ray trace without collision will obviously detect the player, and a ray trace against transparency might or might not. But here the AI takes a shot of the player instead: 5% of his body is actually seen, so the averaged image value is just 5%, and with a detection threshold of 50% gray, he is not detected.
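The averaging step above is simple enough to sketch directly. This assumes the engine has already rendered a 1-bit mask where player pixels are 1 (here the mask is hand-built), and the 50% threshold is the one from the example:

```python
def coverage(mask):
    """Fraction of pixels in the binary mask that belong to the subject."""
    total = sum(len(row) for row in mask)
    return sum(sum(row) for row in mask) / total

def detected(mask, threshold=0.5):
    """The AI 'sees' the subject only if enough of it fills the image."""
    return coverage(mask) >= threshold

# 32x32 mask with ~5% of the player's body peeking out of the foliage.
SIZE = 32
mask = [[0] * SIZE for _ in range(SIZE)]
for i in range(51):              # 51 / 1024 ≈ 5% white pixels
    mask[i // SIZE][i % SIZE] = 1

print(coverage(mask))   # ≈ 0.0498
print(detected(mask))   # False: 5% visible is well below the 50% threshold
```

In practice the threshold would probably be tuned per difficulty or per AI type rather than fixed at 50%, but the core operation is just this mean-and-compare.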
It’s going to cost some performance, so it doesn’t have to be enabled for every AI everywhere. Enable it only when the player gets close to the AI, or for specific AIs that can snipe from miles away.
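That gating could look something like this. The radius value and the sniper flag are made-up parameters, just to show the idea of only paying for the image check when it matters:

```python
from math import dist  # Python 3.8+

def wants_image_vision(ai_pos, player_pos, radius=30.0, is_sniper=False):
    """Run the expensive image-based check only when it matters:
    snipers always use it; everyone else only within `radius` units.
    Outside that, the cheap ray trace (or nothing) is good enough."""
    return is_sniper or dist(ai_pos, player_pos) <= radius

print(wants_image_vision((0, 0), (10, 0)))                   # True: close by
print(wants_image_vision((0, 0), (100, 0)))                  # False: too far
print(wants_image_vision((0, 0), (100, 0), is_sniper=True))  # True: sniper
```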
No more vision-blocking volumes. Don’t need them. But you need to make sure assets are not back-face culled in this pass, since culled back faces would count as invisible and the AI would see through them.
Make use of a custom G-buffer; maybe it can also double as the player’s vision, for custom vision tools like night-vision goggles, thermal goggles, etc.
NVIDIA’s vehicle GPUs kinda have the power to do this with 720p cameras, 12 of them? Maybe games could get by with 1/20 of that resolution, and also 12 of them? I don’t know.
A better way to give AI something like real human vision? Maybe.