Actor size difference influences AI sight perception?

So I have a Behavior Tree set up so that the AI-controlled pawn moves to the player's location when the perceived actors (sight sense) include the player pawn, and stops moving when the player's last sensed stimulus (sight, in this case) is no longer successfully sensed (that is, when the AI no longer registers the player as seen).
This works while the player is moving within the AI pawn's sight range, but as soon as the player stops, he is no longer successfully sensed.
I have the AIPerceptionStimuliSource set on the player.

The player pawn is relatively small compared to the AI pawn. I mention this because when I put a larger static mesh on the player, the problem goes away and the player is correctly sensed, whether moving or not.
I can't see what's going on using the AI Debugger, but it seems the height difference is what's causing the discrepancy. Can this be true? And is it a bug or a feature of AIPerception?

Can you show an image of your Blueprint, and how exactly you are processing the perception updates?

It is called in a service node of the Behavior Tree. The player pawn is set as the enemy actor if seen, but as I said, the Successfully Sensed bool on the AIStimulus is false if the player is not moving.

Hmm. I assume you only have the sight sense set up and no others, since you're not checking which sense the stimulus is from? If so, I'm not sure why it would be going false.

Have you tried using the gameplay debugger to get a visualization of what is happening? That would be my first step.

Yes, I only have the sight sense set up. I checked with the AI Debugger: without the added static mesh, the pawn is not registered while standing still. I also scaled down the added static mesh and the pawn can still be seen, so it's not the height difference (it would be strange if it were)…

Could it be that my player pawn being a vehicle is causing this behavior?

Don't see why, to be honest; I wouldn't have thought the perception system would care about anything like that. Maybe the calculated actor bounds are having some effect, since they would change as you changed the mesh, but I don't know why they would affect perception.

The only other thing I can think to check is the pivot point of your player mesh. If the pivot is at the bottom of the mesh, the sight trace could be blocked by the ground. You could try replacing the mesh with a basic sphere and see if that makes any difference.

I think you are right about the pivot point. Unreal vehicle pivot points are all close to, if not under, the ground. When I spawn the player in the air, it is seen; as soon as it reaches the ground, it is not.
I didn't know the pivot point made a difference to AI Perception sight. Thanks for your help!

My 2 cents here: AI Perception's sight is influenced by your "senser's" eye height. If you've put AIPerception on a controller (AIController), it will go and check its controlled pawn's eye height for the line traces it needs to do in order to detect things in the game world. Likewise, if you've put the AIPerception component on the pawn itself (which is discouraged, by the way), it's going to check the pawn's pre-configured eye-height value. So make sure your pawn's eye height is set up correctly.

Also, the AIPerception component only seems to update sight-related info upon sensing / un-sensing something, so you won't get continuous updates after the initial sensing has occurred (via OnPerceptionUpdated and/or OnTargetPerceptionUpdated).

Take a look at AIPerceptionComponent.cpp, line 359, where you'll see how the perception system gets the start location for its later line traces:

void UAIPerceptionComponent::GetLocationAndDirection(FVector& Location, FVector& Direction) const
{
	const AActor* OwnerActor = Cast<AActor>(GetOuter());
	if (OwnerActor != nullptr)
	{
		FRotator ViewRotation(ForceInitToZero);
		OwnerActor->GetActorEyesViewPoint(Location, ViewRotation);      // ----> THIS RIGHT HERE!
		Direction = ViewRotation.Vector();
	}
}

Then, if you look at Controller.cpp, line 533, you can see how the controller obtains the eyes' viewpoint by forwarding the call to the controlled pawn:

void AController::GetActorEyesViewPoint( FVector& out_Location, FRotator& out_Rotation ) const
{
	// If we have a Pawn, this is our view point.
	if ( Pawn != NULL )
	{
		Pawn->GetActorEyesViewPoint( out_Location, out_Rotation );
	}
	// otherwise, controllers don't have a physical location
}

Therefore, it's important to configure your pawn's eye height correctly; otherwise the ground (landscape) may block your perception system's line traces from reaching other actors:

216439-pawn-eyeheight.png

Note that, as an edge case, you may also find that your perception component's OnTargetPerceptionUpdated and/or OnPerceptionUpdated events fire very frequently, with the Successfully Sensed boolean alternating between true and false. This is also a result of incorrect eye-height settings.

Hope this helps!

Thank you so much! I was looking to just slightly increase the height of the AI's vision, and this worked like a charm!