Widget input: It SMELLS

In short, UUserWidget comes with a ton of virtual methods which are called when a key is pressed or another input event occurs:

	virtual FReply NativeOnKeyChar( const FGeometry& InGeometry, const FCharacterEvent& InCharEvent );
	virtual FReply NativeOnPreviewKeyDown( const FGeometry& InGeometry, const FKeyEvent& InKeyEvent );
	virtual FReply NativeOnKeyDown( const FGeometry& InGeometry, const FKeyEvent& InKeyEvent );
	virtual FReply NativeOnKeyUp( const FGeometry& InGeometry, const FKeyEvent& InKeyEvent );
	virtual FReply NativeOnAnalogValueChanged( const FGeometry& InGeometry, const FAnalogInputEvent& InAnalogEvent );
	virtual FReply NativeOnMouseButtonDown( const FGeometry& InGeometry, const FPointerEvent& InMouseEvent );
	virtual FReply NativeOnPreviewMouseButtonDown( const FGeometry& InGeometry, const FPointerEvent& InMouseEvent );
	virtual FReply NativeOnMouseButtonUp( const FGeometry& InGeometry, const FPointerEvent& InMouseEvent );

Instead of going through this code bloat, I just added an input action through my project settings and used the default input component on the widget to bind that action to a method on the widget. The engine code simply pushes those components onto a stack on the PlayerController.
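That push/pop is roughly the following (a sketch: MyInputComponent, ActOnNavBack and the "NavBack" action are made-up names, while PushInputComponent / PopInputComponent are the actual APlayerController APIs):

```cpp
// Sketch: registering a widget's input component on the PlayerController's
// input stack. Creating the component with NewObject here is an assumption;
// the push/pop calls are the real engine methods.
void UMyWidget::NativeConstruct()
{
    Super::NativeConstruct();
    if (APlayerController* PC = GetOwningPlayer())
    {
        if (!MyInputComponent)
        {
            MyInputComponent = NewObject<UInputComponent>(this);
            // "NavBack" is a hypothetical action mapping from Project Settings.
            MyInputComponent->BindAction(TEXT("NavBack"), IE_Released, this, &UMyWidget::ActOnNavBack);
        }
        PC->PushInputComponent(MyInputComponent);
    }
}

void UMyWidget::NativeDestruct()
{
    if (APlayerController* PC = GetOwningPlayer())
    {
        PC->PopInputComponent(MyInputComponent); // remove it from the stack again
    }
    Super::NativeDestruct();
}
```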

Easy. Or not?

Turns out there are a few challenges, because this system doesn’t prioritize or bubble input out of the box like the methods above do. I solved that. But then I realized that when I type text into a UWidget such as an editable text widget, the input actions still execute while typing.

Then… WHAT is the point of having an input component on any UserWidget? Are we actually required to override methods like NativeOnKeyDown and just check key X against any possible input action? Can’t be serious?



Thanks for wasting a few days of my time, Epic, with the lack of documentation and the layers of junk forming the UI system. Here is what I figured out:

Normally you have input actions set up for certain keys. A UserWidget holds an input component which can be pushed onto and removed from a PlayerController’s input stack.

PC->PushInputComponent(GetManagedInputComponent());

At first it would seem logical to bind a method on the widget to an input action through this component.

InputComponent->BindAction(INPUTACTIONMAPPING_NavBack, EInputEvent::IE_Released, this, &UMyWidget::ActOnNavBack);

This is not the case, because of how input is routed. Input components:

  1. Do not prioritize based on which widget is focused, or in what order.

  2. Only respect a manually set priority if the component consumes the input.

  3. Execute even while you type in an editable text widget.

  4. Use the same stack on the controller as any other input component.

We could write an extension to a widget that deals with prioritization by updating the priority whenever a widget on the focus path changes. Points 1, 2 and 3 cannot be dealt with at this level, which makes the input component on the UUserWidget effectively garbage. The proper way to implement input seems to be overriding the UserWidget methods such as “NativeOnKeyDown” and the other relevant methods, of which there are quite a few. Because we don’t want to hardcode any keys at this level, we would still have to manually check whether a pressed key exists in any input action before we can even decide how to proceed.

Tell me there is a better way than this -_- it offends me:

// Input: route every native event through one helper that matches the event's
// chord against the (rebindable) input actions; fall back to Super when unhandled.

FReply UThatWidget::NativeOnKeyDown(const FGeometry& InGeometry, const FKeyEvent& InKeyEvent) {
    if (CanFindAndExecuteInputAction()) {
        FEventReply EventReply = FindAndExecuteInputAction(UHIDUtils::GetInputChordFromKeyEvent(InKeyEvent), true);
        if (!EventReply.NativeReply.IsEventHandled()) {
            EventReply.NativeReply = Super::NativeOnKeyDown(InGeometry, InKeyEvent);
        }
        return EventReply.NativeReply;        
    }
    return FReply::Unhandled();
}

FReply UThatWidget::NativeOnKeyUp(const FGeometry& InGeometry, const FKeyEvent& InKeyEvent) {
    if (CanFindAndExecuteInputAction()) {
        FEventReply EventReply = FindAndExecuteInputAction(UHIDUtils::GetInputChordFromKeyEvent(InKeyEvent), false);
        if (!EventReply.NativeReply.IsEventHandled()) {
            EventReply.NativeReply = Super::NativeOnKeyUp(InGeometry, InKeyEvent);
        }
        return EventReply.NativeReply;
    }
    return FReply::Unhandled();
}

FReply UThatWidget::NativeOnMouseButtonDown(const FGeometry& InGeometry, const FPointerEvent& InMouseEvent) {
    if (CanFindAndExecuteInputAction()) {
        FEventReply EventReply = FindAndExecuteInputAction(UHIDUtils::GetInputChordFromPointerEvent(InMouseEvent), true);
        if (!EventReply.NativeReply.IsEventHandled()) {
            EventReply.NativeReply = Super::NativeOnMouseButtonDown(InGeometry, InMouseEvent);
        }
        return EventReply.NativeReply;
    }
    return FReply::Unhandled();
}

FReply UThatWidget::NativeOnMouseButtonUp(const FGeometry& InGeometry, const FPointerEvent& InMouseEvent) {
    if (CanFindAndExecuteInputAction()) {
        FEventReply EventReply = FindAndExecuteInputAction(UHIDUtils::GetInputChordFromPointerEvent(InMouseEvent), false);
        if (!EventReply.NativeReply.IsEventHandled()) {
            EventReply.NativeReply = Super::NativeOnMouseButtonUp(InGeometry, InMouseEvent);
        }
        return EventReply.NativeReply;
    }
    return FReply::Unhandled();
}

FReply UThatWidget::NativeOnMouseWheel(const FGeometry& InGeometry, const FPointerEvent& InMouseEvent) {
    if (CanFindAndExecuteInputAction()) {
        FEventReply EventReply = FindAndExecuteInputAction(UHIDUtils::GetInputChordFromPointerEvent(InMouseEvent), true);
        if (!EventReply.NativeReply.IsEventHandled()) {
            EventReply.NativeReply = Super::NativeOnMouseWheel(InGeometry, InMouseEvent);
        }
        return EventReply.NativeReply;
    }
    return FReply::Unhandled();
}

FReply UThatWidget::NativeOnMouseButtonDoubleClick(const FGeometry& InGeometry, const FPointerEvent& InMouseEvent) {
    if (CanFindAndExecuteInputAction()) {
        FEventReply EventReply = FindAndExecuteInputAction(UHIDUtils::GetInputChordFromPointerEvent(InMouseEvent), true);
        if (!EventReply.NativeReply.IsEventHandled()) {
            EventReply.NativeReply = Super::NativeOnMouseButtonDoubleClick(InGeometry, InMouseEvent);
        }
        return EventReply.NativeReply;
    }
    return FReply::Unhandled();
}


// Add a sh*t ton of other checks for gestures / voice etc??

Bump

I’m not sure I’m clear on what you are trying to do here, but if you want to change up what inputs you use to confirm/cancel etc. in a widget, I believe Common UI plugin lets you do that.

It’s a free plugin Epic created for Fortnite that they later rolled into the engine for anyone to use.

Uhmm, what exactly is not clear? I’m attempting a minimal implementation of widget input on top of the bloated system Epic provides, without adding more bloat through CommonUI. I checked it out: it’s a whole lot of yada yada for a few lines of content, most of which I had already written as separate modules.

Can you give a use case? Is it when a UI is displayed but a hot key also does the same action?

I’ve shipped multiple UE projects and cannot figure out what problem you are describing …

For a use case, we can first take a look at how bindings are made on a character or PlayerController. They use the input component to bind an input action to a method, which is very clean:

InputComponent->BindAction(INPUTACTIONMAPPING_Jump, EInputEvent::IE_Released, this, &AMyCharacter::ActOnJump);

If you do that on a UserWidget to control UI actions such as “increase slider” or “reset options”, then you are not using the input routing system specific to widgets (which you use by overriding OnKeyDown, OnMouseButtonDown, etc.; there are a bunch). That routing system ensures that a widget can handle or bubble input, that input is prioritized, and that, for example, input actions don’t execute while you are typing in an editable text UWidget. The problem is that this is incredibly messy to implement: you have to override the methods given in my example, test whether a pressed key actually maps to an input action, and only then call a method. Why compare against an input action? Because I don’t hardcode my keys; they are rebindable.
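That comparison of a pressed key against the rebindable action mappings can be sketched with UInputSettings (a real engine class whose GetActionMappingByName is the actual API; the helper method and widget name here are hypothetical):

```cpp
// Sketch: does this key event correspond to the given (rebindable) action
// mapping from Project Settings? UInputSettings and FInputActionKeyMapping
// are real engine types; KeyMatchesAction itself is a made-up helper.
bool UMyWidget::KeyMatchesAction(const FKeyEvent& InKeyEvent, const FName ActionName) const
{
    TArray<FInputActionKeyMapping> Mappings;
    UInputSettings::GetInputSettings()->GetActionMappingByName(ActionName, Mappings);

    for (const FInputActionKeyMapping& Mapping : Mappings)
    {
        // Compare the key plus the modifier chord.
        if (Mapping.Key == InKeyEvent.GetKey()
            && Mapping.bShift == InKeyEvent.IsShiftDown()
            && Mapping.bCtrl == InKeyEvent.IsControlDown()
            && Mapping.bAlt == InKeyEvent.IsAltDown()
            && Mapping.bCmd == InKeyEvent.IsCommandDown())
        {
            return true; // this chord is bound to the action
        }
    }
    return false;
}
```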


Fixed it by overriding the methods such as OnKeyDown, pulling the input through a single method where keys are compared to input actions, and the names of input actions are matched to function pointers. Still smells, but that’s just part of the engine.
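The “single method plus function pointers” part of that fix is engine-agnostic; stripped of the UE types, the dispatch pattern looks roughly like this (all names illustrative, not from the engine):

```cpp
#include <map>
#include <string>

// Sketch of the dispatch described above: every overridden key handler funnels
// into one routing method, which looks up the action name resolved from the
// pressed key and invokes the bound member function.
class Widget {
public:
    Widget() {
        // Action-name -> handler table: the function-pointer matching from the post.
        Handlers["NavBack"] = &Widget::ActOnNavBack;
        Handlers["Confirm"] = &Widget::ActOnConfirm;
    }

    // Single entry point for all input events.
    // Returns true when the action was found and executed ("handled").
    bool RouteAction(const std::string& ActionName) {
        auto It = Handlers.find(ActionName);
        if (It == Handlers.end()) {
            return false; // unhandled -> caller lets the event bubble
        }
        (this->*(It->second))();
        return true;
    }

    int NavBackCount = 0;
    int ConfirmCount = 0;

private:
    void ActOnNavBack() { ++NavBackCount; }
    void ActOnConfirm() { ++ConfirmCount; }

    std::map<std::string, void (Widget::*)()> Handlers;
};
```

An unknown action name returns false, which maps onto returning FReply::Unhandled() so the event can bubble up the widget tree.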


Hi, I am on 5.4.

Looking at the UUserWidget code, there exist functions called “ListenForInputAction”, “StopListeningForInputAction” and “StopListeningForAllInputActions”; the problem is that those handle the old input action system, not the new Enhanced Input actions.

So why is Enhanced Input the new input method when it has not been properly implemented for widgets in C++? In Blueprint you can add these enhanced actions to widgets, but it is not clear to me which mechanism is equivalent to that in C++.
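For what it’s worth, the closest C++ equivalent I know of goes through the owning player rather than the widget: add a mapping context via the UEnhancedInputLocalPlayerSubsystem and bind on a UEnhancedInputComponent. A sketch, assuming the project uses the EnhancedInput plugin and the PlayerController’s InputComponent is a UEnhancedInputComponent (the widget, handler and the passed-in assets are placeholders; the subsystem and component calls are the real Enhanced Input APIs):

```cpp
// Sketch: wiring an Enhanced Input action to a widget handler in C++.
// MenuContext and NavBackAction are assets you pass in; ActOnNavBack is a
// hypothetical handler on this widget.
void UMyWidget::SetupEnhancedInput(UInputMappingContext* MenuContext, UInputAction* NavBackAction)
{
    APlayerController* PC = GetOwningPlayer();
    if (!PC) return;

    // Make the menu's mappings active for this local player.
    if (ULocalPlayer* LP = PC->GetLocalPlayer())
    {
        if (auto* Subsystem = LP->GetSubsystem<UEnhancedInputLocalPlayerSubsystem>())
        {
            Subsystem->AddMappingContext(MenuContext, /*Priority=*/1);
        }
    }

    // Bind the action to a method on the widget.
    if (auto* EIC = Cast<UEnhancedInputComponent>(PC->InputComponent))
    {
        EIC->BindAction(NavBackAction, ETriggerEvent::Triggered, this, &UMyWidget::ActOnNavBack);
    }
}
```

Note this bypasses Slate’s focus-based routing entirely, so it has the same “fires while typing” problem discussed earlier in the thread.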

We can no longer use these ListenForInputAction functions because enhanced bindings are more complex; they include an ETriggerEvent, etc. Moreover, new features like chorded actions will require overriding the Native functions for navigation and key presses so that input does not bubble up when chorded. Also, these Native events consume input and always run before Enhanced Input events, potentially stopping them.

Sadly, instead of focusing on flexibility by deeply understanding the engine internals (documentation would help), plugins like CommonUI (and Epic) focus on simplicity by forbidding things that could otherwise be achieved. So in the end you are alone against the engine UI, still needing to override and write your own code in complex scenarios such as the input mode set to Game and UI, etc.

I mean, disabling at the engine level the ability to fine-grain control when the mouse cursor is shown or hidden does not seem very professional… The bShowMouseCursor and bEnableMouseOverEvents flags are there in C++ and Blueprint but have no effect in UI input modes.

Or forcing the UI-only input mode in the CommonUI plugin, when you need a UI that works in Game and UI mode…

At the end of the day I decided to do as the engine developers intend: always show the mouse in UI modes, skip CommonUI, and write my own solution for gamepad navigation based on the Game and UI input mode and Enhanced Input actions.

But I would not say it smells; it just lacks documentation, and some explanation and reasoning about why these decisions were taken.