PawnSensing / AI Perception orientation

I’ve been looking into adding player-character sensing to several AI characters in our project via Blueprints (if for no other reason than rapid prototyping).

I’d like the sensing to be oriented to a particular bone or socket so it follows the animation being used.

I started with PawnSensing even though I’m aware it’s intended to be phased out.

However, I don’t appear to be able to change the socket it should follow (the editor complains about not being able to change it on inherited components). If it’s meant to follow a socket of a particular name, it’s not clear what that name should be. It doesn’t appear to follow a particular bone, it doesn’t expose a transform I can alter at the Blueprint level, and it doesn’t offer a way to programmatically change the socket to follow. I also can’t attach it to some sort of secondary scene component that does have a transform. Short of attaching it to an entirely separate actor that invisibly sits at the first actor’s eyes, I don’t seem to have a way to alter the orientation, and that solution seems like swatting a fly with a Buick.

I went to check out the AI Perception system and seem to be running into similar problems. So I thought I’d stop spinning my wheels and post here.

What is the canonical way to reorient one of the two perception systems? Part of my problem is compensating for skeletons that are oriented strangely in their animations relative to the actor’s orientation, but it’d also be nice to have the perception move about with the head, eyestalk, floating nimbus cloud, whatever.

Advice?

(I realize I may gain more control by abandoning blueprints and heading to C++ land, but I thought I’d ask about the blueprint side anyway… any advice for either language would be appreciated).


Hey Michael,

There’s currently no BP way to affect the perception listener’s location/rotation. In C++ land all you need to do is override AActor::GetActorEyesViewPoint; that’s what AI Perception uses to determine where the AI’s eyes are.
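For what it’s worth, here’s a minimal sketch of that override on a Character-based AI pawn. The class name, the EyesSocketName property, and the "head" socket default are placeholder names of mine, and you may still need to compensate if the bone’s axes don’t line up with the actor’s forward direction:

```cpp
// MyAICharacter.h
#pragma once

#include "CoreMinimal.h"
#include "GameFramework/Character.h"
#include "MyAICharacter.generated.h"

UCLASS()
class AMyAICharacter : public ACharacter
{
    GENERATED_BODY()

public:
    // Socket or bone on the skeletal mesh that the perception "eyes" should follow (placeholder default)
    UPROPERTY(EditAnywhere, Category = "AI")
    FName EyesSocketName = TEXT("head");

    virtual void GetActorEyesViewPoint(FVector& OutLocation, FRotator& OutRotation) const override;
};

// MyAICharacter.cpp
#include "MyAICharacter.h"
#include "Components/SkeletalMeshComponent.h"

void AMyAICharacter::GetActorEyesViewPoint(FVector& OutLocation, FRotator& OutRotation) const
{
    const USkeletalMeshComponent* SkelMesh = GetMesh();
    if (SkelMesh && SkelMesh->DoesSocketExist(EyesSocketName))
    {
        // Report the socket/bone transform as the viewpoint; it moves with the animation
        const FTransform SocketTransform = SkelMesh->GetSocketTransform(EyesSocketName);
        OutLocation = SocketTransform.GetLocation();
        OutRotation = SocketTransform.Rotator();
    }
    else
    {
        // Fall back to the default behaviour (actor location + BaseEyeHeight)
        Super::GetActorEyesViewPoint(OutLocation, OutRotation);
    }
}
```

Anything that asks the actor for its eyes viewpoint, including AI Perception sight as noted above, should then track the socket as the animation moves it.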

If you want to go independent of the Actor owning the component, you’ll have to go into AIPerceptionComponent, make UAIPerceptionComponent::GetLocationAndDirection virtual, and then implement whatever you want in your own perception component extension.
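A rough sketch of what that extension could look like, assuming you’ve made the base function virtual in your engine build as described, and assuming GetLocationAndDirection keeps a (FVector& Location, FVector& Direction) const shape; the component class and the ListenerSocketName property are made-up names:

```cpp
// MySocketPerceptionComponent.h
// Requires an AIModule dependency, plus the engine-side change above so that
// UAIPerceptionComponent::GetLocationAndDirection is virtual.
#pragma once

#include "CoreMinimal.h"
#include "GameFramework/Character.h"
#include "Components/SkeletalMeshComponent.h"
#include "Perception/AIPerceptionComponent.h"
#include "MySocketPerceptionComponent.generated.h"

UCLASS(ClassGroup = AI, meta = (BlueprintSpawnableComponent))
class UMySocketPerceptionComponent : public UAIPerceptionComponent
{
    GENERATED_BODY()

public:
    // Socket/bone on the owner's skeletal mesh to report as the listener's location (placeholder default)
    UPROPERTY(EditAnywhere, Category = "AI")
    FName ListenerSocketName = TEXT("head");

    virtual void GetLocationAndDirection(FVector& Location, FVector& Direction) const override
    {
        if (const ACharacter* OwnerCharacter = Cast<ACharacter>(GetOwner()))
        {
            const USkeletalMeshComponent* SkelMesh = OwnerCharacter->GetMesh();
            if (SkelMesh && SkelMesh->DoesSocketExist(ListenerSocketName))
            {
                // Report the socket's location and forward vector as the listener's pose
                const FTransform SocketTransform = SkelMesh->GetSocketTransform(ListenerSocketName);
                Location = SocketTransform.GetLocation();
                Direction = SocketTransform.GetRotation().GetForwardVector();
                return;
            }
        }
        // Fall back to the stock behaviour for owners without a matching socket
        Super::GetLocationAndDirection(Location, Direction);
    }
};
```

The fall-back keeps the stock behaviour for any owner that doesn’t have a matching socket, so the same component can still be dropped onto non-skeletal pawns.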

Regarding PawnSensing, not a clue, we don’t support this one anymore (as you are aware).

Cheers