I’d like to create an interaction system like the one in the image posted below.
In Firewatch, the interaction system seems to be context sensitive, and I can replicate this to a degree.
So far, I’ve created an interface which I can use to trigger certain events, such as opening doors, turning on lights, etc.
I’ve also been able to create a HUD which toggles the input action prompt through the widget.
I followed one tutorial that shows creating a text widget which can then receive the object’s name from an interface function. However, this didn’t work for me.
What would be the best way to implement two widgets which change based on the object’s name and on whether the object is a pickup or a door, so the prompt would change to say “Pick up” or “Open”, for example?
Create a widget with an Image and Text. Create two variables: Texture2D and Text. Expose them on spawn so that you can assign them when constructing a widget. In Construct (or PreConstruct to have an editor preview), assign the variables to the Image and Text.
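If it helps, here’s the same widget sketched in C++ (a minimal sketch; the class and member names are placeholders I made up, and in pure Blueprint you’d just tick “Instance Editable” and “Expose on Spawn” on the two variables):

```cpp
// InteractionPromptWidget.h (hypothetical names throughout).
#pragma once

#include "CoreMinimal.h"
#include "Blueprint/UserWidget.h"
#include "Components/Image.h"
#include "Components/TextBlock.h"
#include "Engine/Texture2D.h"
#include "InteractionPromptWidget.generated.h"

UCLASS()
class UInteractionPromptWidget : public UUserWidget
{
	GENERATED_BODY()

public:
	// Exposed on spawn so they can be set when the widget is created.
	UPROPERTY(BlueprintReadWrite, EditAnywhere, meta = (ExposeOnSpawn = true))
	UTexture2D* PromptIcon = nullptr;

	UPROPERTY(BlueprintReadWrite, EditAnywhere, meta = (ExposeOnSpawn = true))
	FText PromptText;

	// Bound to widgets of the same name in the UMG designer.
	UPROPERTY(meta = (BindWidget))
	UImage* Icon = nullptr;

	UPROPERTY(meta = (BindWidget))
	UTextBlock* Label = nullptr;

	// Equivalent of assigning the variables in Construct.
	virtual void NativeConstruct() override
	{
		Super::NativeConstruct();
		if (Icon && PromptIcon)
		{
			Icon->SetBrushFromTexture(PromptIcon);
		}
		if (Label)
		{
			Label->SetText(PromptText);
		}
	}
};
```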
Create a new Enum named EInteractionType.
Open the enum and add a couple of types, like “Doors,” “Pickup,” “Talk,” etc.—whatever you need.
(I also like to add “None” as the first element to handle some unexpected behaviors.)
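For reference, this is what the same enum looks like in C++ (a sketch; the Blueprint Enum asset works identically):

```cpp
UENUM(BlueprintType)
enum class EInteractionType : uint8
{
	None,   // first element, used as the safe default (see the notes below)
	Doors,
	Pickup,
	Talk
};
```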
Create a new Interaction Interface, then create a function inside, like GetInteractionType, with the EInteractionType enum you created as an output.
Now, in a blueprint you want to interact with, add your Interaction Interface.
Implement the GetInteractionType function and set the type you want for this blueprint.
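In C++ the interface could look roughly like this (a sketch, assuming the EInteractionType enum from above is in a header you can include):

```cpp
// InteractionInterface.h (hypothetical file and class names).
#pragma once

#include "CoreMinimal.h"
#include "UObject/Interface.h"
#include "InteractionType.h" // assumed header for EInteractionType
#include "InteractionInterface.generated.h"

UINTERFACE(BlueprintType)
class UInteractionInterface : public UInterface
{
	GENERATED_BODY()
};

class IInteractionInterface
{
	GENERATED_BODY()

public:
	// Each interactable blueprint/class overrides this to report its type.
	UFUNCTION(BlueprintNativeEvent, BlueprintCallable, Category = "Interaction")
	EInteractionType GetInteractionType();
};
```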
The last thing is to call this function. It’s up to you how to handle it. One way would be to cast a ray from the player’s camera and check if it hits an actor with the Interaction Interface (using the DoesImplementInterface node). If so, then call GetInteractionType and create the Interaction widget, populating its exposed variables with the Texture and Text you want for the given type.
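A rough C++ version of that trace (the function name, the owning class, and the 300-unit range are all placeholders):

```cpp
void AMyCharacter::TraceForInteractable()
{
	AController* MyController = GetController();
	if (!MyController)
	{
		return;
	}

	// Ray from the player's camera viewpoint.
	FVector ViewLoc;
	FRotator ViewRot;
	MyController->GetPlayerViewPoint(ViewLoc, ViewRot);
	const FVector TraceEnd = ViewLoc + ViewRot.Vector() * 300.f;

	FCollisionQueryParams Params;
	Params.AddIgnoredActor(this);

	FHitResult Hit;
	if (GetWorld()->LineTraceSingleByChannel(Hit, ViewLoc, TraceEnd, ECC_Visibility, Params))
	{
		AActor* HitActor = Hit.GetActor();
		// C++ equivalent of the DoesImplementInterface node.
		if (HitActor && HitActor->Implements<UInteractionInterface>())
		{
			const EInteractionType Type =
				IInteractionInterface::Execute_GetInteractionType(HitActor);
			// ...create the Interaction widget here and fill its exposed
			// Texture/Text variables based on Type...
		}
	}
}
```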
Of course, there are hundreds of possible approaches—this is just one of them.
A couple of notes:
If you have “None” as the first enum element, you can use it as the default output of GetInteractionType. That way, when you add the interface but forget to implement it, you’ll trigger the “None” type, which will be cleaner to debug.
You can add a TriggerInteract (or something similar) function to the interface to actually trigger interaction in the target blueprint when the input is triggered.
In the future, when you feel confident, instead of using the enum, you can create a Data Table, which would store not only the type name but also the Texture and Text at the same time.
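The row struct for such a Data Table could look something like this (hypothetical names; one row per interaction type):

```cpp
#include "CoreMinimal.h"
#include "Engine/DataTable.h"
#include "InteractionTypeRow.generated.h"

USTRUCT(BlueprintType)
struct FInteractionTypeRow : public FTableRowBase
{
	GENERATED_BODY()

	// Prompt label, e.g. "Open" or "Pick Up".
	UPROPERTY(EditAnywhere, BlueprintReadOnly)
	FText PromptText;

	// Soft reference so the icon is only loaded when needed.
	UPROPERTY(EditAnywhere, BlueprintReadOnly)
	TSoftObjectPtr<UTexture2D> PromptIcon;
};
```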
Well, I have a similar system (arguably an even better one), so here is an additional breakdown of the approach, in addition to @lordlolek’s.
1- InteractionActorComponent: Create a component with all the variables you need for interaction, such as InteractionState (Active, Inactive, Depleted), IntName, IntAction (e.g. “pickup”; I use a string to stay flexible, but you can go with enums), IntID, bIsShown, InteractionDistance (Close, Normal, Far), etc. (A rough C++ sketch of this component follows after the breakdown.)
2- InteractionScanner: Create a scanner, like a sphere collider, that detects and collects the InteractionActorComponents within a radius. You can use a custom collision channel too, if you prefer. The interaction scanner does two things: (1) it decides which interaction in the radius should be shown to the player (distance, angle, whether the interaction state matches, whether there is an object in between, etc.); (2) it detects how the player interacts with it (is the button pressed, etc.) for that interaction.
3- InteractionProcessor: I use a subsystem for this, but it could be something else. It knows which interactions are available to a player at a given gameplay moment and bridges the communication between the player and the interaction component. For example: the player presses a button, the subsystem catches the press event and calls the interaction component to do its job; the component has an Interacted event and, depending on the component’s settings, opens a door, pulls a lever, opens a dialog, etc.
I also do the UMG operations here, showing a designated UI for that player with whichever interaction is prioritized, and only the UMG ticks, for visibility reasons.
You can also do the context-sensitive part here: once the interaction event is received, you know that an interaction happened, who interacted, and what was interacted with, so you can decide what animation the player should play. If you add a socket or a box to the actor component, you can point the animation’s inverse-kinematics hand/fingers at that location.
This way, the interaction system doesn’t care about what is being interacted with. If something is interacted with, that’s between the InteractionActorComponent and whatever is listening to its commands, like a door.
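To make step 1 concrete, here is roughly what the component could look like in C++; the variable names are the ones listed in the breakdown above, and the rest (the delegate, the defaults) is just one possible layout:

```cpp
#pragma once

#include "CoreMinimal.h"
#include "Components/ActorComponent.h"
#include "InteractionActorComponent.generated.h"

UENUM(BlueprintType)
enum class EInteractionState : uint8 { Active, Inactive, Depleted };

UENUM(BlueprintType)
enum class EInteractionDistance : uint8 { Close, Normal, Far };

// Fired by the processor when the player interacts; the owning actor
// (door, lever, dialog...) binds to this and does the actual work.
DECLARE_DYNAMIC_MULTICAST_DELEGATE(FOnInteracted);

UCLASS(ClassGroup = (Custom), meta = (BlueprintSpawnableComponent))
class UInteractionActorComponent : public UActorComponent
{
	GENERATED_BODY()

public:
	UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = "Interaction")
	EInteractionState InteractionState = EInteractionState::Active;

	UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = "Interaction")
	FString IntName;

	// e.g. "pickup"; a string for flexibility, could be an enum instead.
	UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = "Interaction")
	FString IntAction;

	UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = "Interaction")
	FName IntID;

	// Whether the prompt is currently shown to the player.
	UPROPERTY(BlueprintReadOnly, Category = "Interaction")
	bool bIsShown = false;

	UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = "Interaction")
	EInteractionDistance InteractionDistance = EInteractionDistance::Normal;

	UPROPERTY(BlueprintAssignable, Category = "Interaction")
	FOnInteracted OnInteracted;
};
```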
A couple of advantages of this system:
It’s an agnostic system, so it can be extended; the scanner can scan many more things around the player if you want.
Since it’s independent, it’s much easier to use and integrate.
It’s much easier to localise.
It can have different interaction conditions; for example, if the player is drunk, you can lower the interaction distance.
It’s much more flexible in terms of interaction control, with conditions like: no interaction behind a glass wall, interaction through a fence, no interaction behind the object, and so on.
It gives you multiplayer control: if an object is being interacted with by one player, you can set its state to Inactive so other clients can’t interact (or still can, if you want).