Sorry, just catching up on old threads.
So to get input, first I added the actions under Project Settings -> Input -> Action Mappings.
I then use those action mappings in my global HUD blueprint (not a UMG widget blueprint) to catch and distribute the input. For example, I have an action mapping labeled “ToggleJournal” which accepts input from both the “i” key and the “triangle button”. On press, I check whether the journal is open. If it’s not, I create the journal widget, add it to the viewport, and store it in a UIJournal variable. If it is open, I grab that variable and remove the widget from the viewport.
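That toggle pattern can be sketched as plain C++ (the class and function names here are stand-ins, not real UE API; in the actual project this is Blueprint logic, where the widget would be a UUserWidget subclass created with Create Widget and shown with Add to Viewport):

```cpp
#include <memory>

// Stand-in for the UMG journal widget.
struct JournalWidget {
    bool onScreen = false;
    void addToViewport()    { onScreen = true;  }
    void removeFromParent() { onScreen = false; }
};

// Stand-in for the global HUD blueprint.
struct GameHUD {
    std::unique_ptr<JournalWidget> UIJournal; // null while the journal is closed

    // Bound to the "ToggleJournal" action mapping ("i" key / triangle button).
    void onToggleJournal() {
        if (!UIJournal) {
            // Journal closed: create the widget, show it, keep a reference.
            UIJournal = std::make_unique<JournalWidget>();
            UIJournal->addToViewport();
        } else {
            // Journal open: hide it and drop the reference.
            UIJournal->removeFromParent();
            UIJournal.reset();
        }
    }

    bool journalIsOpen() const { return UIJournal != nullptr; }
};
```

The key design point is that the HUD owns the single source of truth (the UIJournal reference), so “is the journal open?” is just a null check.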
I’m doing something similar with “use” interactions, again managed by the global HUD blueprint. When the player gets near a usable object, the object uses an interface to tell the HUD blueprint “you’re near a usable object”, and the HUD then listens for input on the “use” action mapping. If I detect a press while the player is near a usable object, I pass the event along to the object via a reference the object handed to the HUD when it declared itself usable.
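The same hand-off can be sketched like this (again with hypothetical stand-in names; in the real project IUsable would be a Blueprint Interface and the call would be an interface message):

```cpp
// Stand-in for the Blueprint interface a usable object implements.
struct IUsable {
    virtual void onUse() = 0;
    virtual ~IUsable() = default;
};

// Stand-in for the global HUD blueprint.
struct GameHUD {
    IUsable* nearbyUsable = nullptr; // set when an object declares itself usable

    // Called via the interface when the player enters/leaves an object's range.
    void setNearbyUsable(IUsable* object) { nearbyUsable = object; }
    void clearNearbyUsable()              { nearbyUsable = nullptr; }

    // Bound to the "use" action mapping; forwards the press to the object.
    void onUsePressed() {
        if (nearbyUsable) {
            nearbyUsable->onUse();
        }
    }
};

// Example usable object.
struct Door : IUsable {
    bool opened = false;
    void onUse() override { opened = true; }
};
```

Note the object must also clear itself from the HUD when the player walks away, or the HUD will forward presses to an object that’s out of range.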
Does all this make sense? Basically it’s functional when I only need to catch a single button press, but it doesn’t work for things like navigating a list or using a UI with multiple buttons.