Widget input: It SMELLS

In short, UUserWidget comes with a ton of methods that are called when a key is pressed:

	virtual FReply NativeOnKeyChar( const FGeometry& InGeometry, const FCharacterEvent& InCharEvent );
	virtual FReply NativeOnPreviewKeyDown( const FGeometry& InGeometry, const FKeyEvent& InKeyEvent );
	virtual FReply NativeOnKeyDown( const FGeometry& InGeometry, const FKeyEvent& InKeyEvent );
	virtual FReply NativeOnKeyUp( const FGeometry& InGeometry, const FKeyEvent& InKeyEvent );
	virtual FReply NativeOnAnalogValueChanged( const FGeometry& InGeometry, const FAnalogInputEvent& InAnalogEvent );
	virtual FReply NativeOnMouseButtonDown( const FGeometry& InGeometry, const FPointerEvent& InMouseEvent );
	virtual FReply NativeOnPreviewMouseButtonDown( const FGeometry& InGeometry, const FPointerEvent& InMouseEvent );
	virtual FReply NativeOnMouseButtonUp( const FGeometry& InGeometry, const FPointerEvent& InMouseEvent );

Instead of using this code bloat, I just added an input action through my project settings and used the default input component on the widget to bind that action to a method on the widget. Engine code simply pushes those components onto a stack on the PlayerController.
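To make the stack behaviour concrete, here is a minimal stand-alone C++ model of that mechanism (this is an illustrative sketch, not engine code — `FakeInputComponent`, `FakePlayerController` and the key names are made up for the example): components are pushed LIFO, the newest one sees input first, and the first one to consume a key stops propagation.

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <vector>

// Hypothetical stand-in for an input component: a handler that
// returns true when it consumes the key.
struct FakeInputComponent {
    std::string Name;
    std::function<bool(const std::string& Key)> Handler;
};

// Simplified model of the PlayerController's input stack:
// components are pushed LIFO and the newest one sees input first.
struct FakePlayerController {
    std::vector<FakeInputComponent> Stack;

    void PushInputComponent(FakeInputComponent InComp) { Stack.push_back(std::move(InComp)); }
    void PopInputComponent() { Stack.pop_back(); }

    // Offer the key to each component from the top of the stack down,
    // stopping at the first one that consumes it.
    std::string Route(const std::string& Key) {
        for (auto It = Stack.rbegin(); It != Stack.rend(); ++It)
            if (It->Handler(Key))
                return It->Name;
        return "unhandled";
    }
};
```

Note that nothing in this model knows anything about widget focus — which is exactly the problem discussed below: the stack only knows push order.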

Easy. Or not?

Turns out there are a few challenges, because this system doesn’t prioritize or bubble out of the box the way the earlier methods do. I solved that. But then I realized that when I type text into a UWidget such as an editable text widget, the input actions still execute while typing.

Then… WHAT is the point of having an input component on any UserWidget? Are we actually required to override methods like NativeOnKeyDown and just check key X against any possible input action? Can’t be serious?



Thanks for wasting a few days of my time, Epic. Given the lack of documentation and the layers of junk forming the UI system, this is what I figured out:

Normally you have input actions set up for certain keys. A UserWidget holds an input component which can be pushed onto and popped from a PlayerController’s input stack.

PC->PushInputComponent(GetManagedInputComponent());

At first it would seem logical to bind a method on the widget to an input action through this component.

InputComponent->BindAction(INPUTACTIONMAPPING_NavBack, EInputEvent::IE_Released, this, &UMyWidget::ActOnNavBack);

This is not the case because of how input is routed. Input components:

  1. Do not prioritize based on which widget is focused and in what order.

  2. Only respect a manually set priority if the component consumes the input.

  3. Execute even while you type in an editable text widget.

  4. Use the same stack on the controller as any other component.

We could write an extension to a widget that deals with prioritizing by updating the priority whenever a widget changes on the focus path. That addresses point 1, but points 2, 3 and 4 cannot be dealt with at this level. This makes the input component on the UUserWidget effectively garbage.

The proper way to implement input seems to be overriding the UserWidget methods such as “NativeOnKeyDown” and the other relevant methods, of which there are quite a few. Because we don’t want to hardcode any keys at this level, we still have to manually check whether a pressed key exists in any input action before we can even decide how to proceed.
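The prioritizing extension mentioned above can be sketched in isolation. This is a plain C++ model of the idea, not engine code — `PrioritizedComponent`, `FocusAwareStack` and the widget names are assumptions for illustration: whenever the focus path changes, bump the focused widget’s component above the rest and re-sort, so it is asked first.

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <vector>

// Hypothetical per-widget input component with a numeric priority.
struct PrioritizedComponent {
    std::string WidgetName;
    int Priority = 0; // higher = asked first
};

// Sketch of the "extension" idea: re-prioritize on focus change so
// the focused widget's component is at the front of the stack.
struct FocusAwareStack {
    std::vector<PrioritizedComponent> Components;

    // Called when the focus path changes: raise the focused widget's
    // priority above every other component, then re-sort.
    void OnFocusChanged(const std::string& FocusedWidget) {
        int MaxOther = 0;
        for (const auto& C : Components)
            if (C.WidgetName != FocusedWidget)
                MaxOther = std::max(MaxOther, C.Priority);
        for (auto& C : Components)
            if (C.WidgetName == FocusedWidget)
                C.Priority = MaxOther + 1;
        std::stable_sort(Components.begin(), Components.end(),
                         [](const PrioritizedComponent& A, const PrioritizedComponent& B) {
                             return A.Priority > B.Priority;
                         });
    }

    // The component that would be offered input first.
    std::string First() const { return Components.empty() ? "" : Components.front().WidgetName; }
};
```

Even if this works for ordering, it does nothing about points 2–4: consumption-gated priority, firing while typing, and sharing the stack with gameplay components.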

Tell me there is a better way than this -_- it offends me:

// Input

FReply UThatWidget::NativeOnKeyDown(const FGeometry& InGeometry, const FKeyEvent& InKeyEvent) {
    if (CanFindAndExecuteInputAction()) {
        FEventReply EventReply = FindAndExecuteInputAction(UHIDUtils::GetInputChordFromKeyEvent(InKeyEvent), true);
        if (!EventReply.NativeReply.IsEventHandled()) {
            EventReply.NativeReply = Super::NativeOnKeyDown(InGeometry, InKeyEvent);
        }
        return EventReply.NativeReply;        
    }
    return FReply::Unhandled();
}

FReply UThatWidget::NativeOnKeyUp(const FGeometry& InGeometry, const FKeyEvent& InKeyEvent) {
    if (CanFindAndExecuteInputAction()) {
        FEventReply EventReply = FindAndExecuteInputAction(UHIDUtils::GetInputChordFromKeyEvent(InKeyEvent), false);
        if (!EventReply.NativeReply.IsEventHandled()) {
            EventReply.NativeReply = Super::NativeOnKeyUp(InGeometry, InKeyEvent);
        }
        return EventReply.NativeReply;
    }
    return FReply::Unhandled();
}

FReply UThatWidget::NativeOnMouseButtonDown(const FGeometry& InGeometry, const FPointerEvent& InMouseEvent) {
    if (CanFindAndExecuteInputAction()) {
        FEventReply EventReply = FindAndExecuteInputAction(UHIDUtils::GetInputChordFromPointerEvent(InMouseEvent), true);
        if (!EventReply.NativeReply.IsEventHandled()) {
            EventReply.NativeReply = Super::NativeOnMouseButtonDown(InGeometry, InMouseEvent);
        }
        return EventReply.NativeReply;
    }
    return FReply::Unhandled();
}

FReply UThatWidget::NativeOnMouseButtonUp(const FGeometry& InGeometry, const FPointerEvent& InMouseEvent) {
    if (CanFindAndExecuteInputAction()) {
        FEventReply EventReply = FindAndExecuteInputAction(UHIDUtils::GetInputChordFromPointerEvent(InMouseEvent), false);
        if (!EventReply.NativeReply.IsEventHandled()) {
            EventReply.NativeReply = Super::NativeOnMouseButtonUp(InGeometry, InMouseEvent);
        }
        return EventReply.NativeReply;
    }
    return FReply::Unhandled();
}

FReply UThatWidget::NativeOnMouseWheel(const FGeometry& InGeometry, const FPointerEvent& InMouseEvent) {
    if (CanFindAndExecuteInputAction()) {
        FEventReply EventReply = FindAndExecuteInputAction(UHIDUtils::GetInputChordFromPointerEvent(InMouseEvent), true);
        if (!EventReply.NativeReply.IsEventHandled()) {
            EventReply.NativeReply = Super::NativeOnMouseWheel(InGeometry, InMouseEvent);
        }
        return EventReply.NativeReply;
    }
    return FReply::Unhandled();
}

FReply UThatWidget::NativeOnMouseButtonDoubleClick(const FGeometry& InGeometry, const FPointerEvent& InMouseEvent) {
    if (CanFindAndExecuteInputAction()) {
        FEventReply EventReply = FindAndExecuteInputAction(UHIDUtils::GetInputChordFromPointerEvent(InMouseEvent), true);
        if (!EventReply.NativeReply.IsEventHandled()) {
            EventReply.NativeReply = Super::NativeOnMouseButtonDoubleClick(InGeometry, InMouseEvent);
        }
        return EventReply.NativeReply;
    }
    return FReply::Unhandled();
}


// Add a sh*t ton of other checks for gestures / voice etc??

Bump

I’m not sure I’m clear on what you are trying to do here, but if you want to change up what inputs you use to confirm/cancel etc. in a widget, I believe Common UI plugin lets you do that.

It’s a free plugin Epic created for Fortnite that they just tossed into the engine if anyone wants to use it.

Uhmm, what exactly is not clear? I’m attempting a minimal implementation of widget input on top of the bloated system Epic provides, without adding more bloat through CommonUI. I checked it out: it’s a whole lot of yadayada but only a few lines of content, most of which I already wrote as separate modules.

Can you give a use case? Is it when a UI is displayed but a hot key also does the same action?

I’ve shipped multiple UE projects and cannot figure out what problem you are describing …

For a use case, we can first take a look at how bindings are made on a Character or PlayerController. They use the input component to very easily bind an input action to a method, which is very clean:

InputComponent->BindAction(INPUTACTIONMAPPING_Jump, EInputEvent::IE_Released, this, &UMyCharacter::ActOnJump);

If you do that on a UserWidget to control user-made UI actions such as “increase slider” or “reset options”, then you are not using the input routing system specific to widgets (you use that by overriding OnKeyDown, OnMouseButtonDown etc.; there are a bunch). That routing system ensures that a widget can handle or bubble input, prioritize input, and that, for example, input actions don’t execute while you are typing in an editable text UWidget.

The problem is that this is incredibly messy to implement: you have to override the methods given in my example, then test whether a pressed key is actually in an input action, then call a method. Why compare it to an input action? Because I don’t hardcode my keys; they are rebindable.


Fixed it by overriding methods such as OnKeyDown, pulling the input through a single method where keys are compared to input actions, and the names of input actions are matched with function pointers. Still smells, but that’s just part of the engine.
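The name-to-function-pointer dispatch described in that fix can be sketched like this. It is a stand-alone illustration under assumed names (`UWidgetBase`, `ResolveActionName`, the action names and the fixed key table are all made up; a real widget would query the rebindable action mappings instead):

```cpp
#include <cassert>
#include <map>
#include <string>

// Sketch: funnel every key event through one method that resolves the
// key to an input-action name, then looks the name up in a table of
// member function pointers. Not engine code.
class UWidgetBase {
public:
    // Returns true when the key mapped to one of our actions (handled),
    // false when the event should bubble onwards.
    bool HandleKey(const std::string& Key) {
        const std::string ActionName = ResolveActionName(Key);
        auto It = ActionHandlers.find(ActionName);
        if (It == ActionHandlers.end())
            return false; // not ours: let the event bubble
        (this->*(It->second))();
        return true; // handled: stop bubbling
    }

    std::string LastAction; // for demonstration only

protected:
    using FActionHandler = void (UWidgetBase::*)();

    // Stand-in for the rebindable key -> action lookup; in practice this
    // would search the project's input action mappings.
    std::string ResolveActionName(const std::string& Key) const {
        if (Key == "Escape") return "NavBack";
        if (Key == "Enter")  return "Confirm";
        return "";
    }

    void ActOnNavBack() { LastAction = "NavBack"; }
    void ActOnConfirm() { LastAction = "Confirm"; }

    // Action names matched with member function pointers.
    std::map<std::string, FActionHandler> ActionHandlers = {
        {"NavBack", &UWidgetBase::ActOnNavBack},
        {"Confirm", &UWidgetBase::ActOnConfirm},
    };
};
```

The appeal of the table is that NativeOnKeyDown and friends collapse to a single `HandleKey`-style call, and rebinding a key only changes the resolution step, never the handlers.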


Hi, I am on 5.4.

Sadly, instead of focusing on flexibility by deeply understanding engine internals (documentation would help), plugins like CommonUI focus on simplicity by forbidding things that could otherwise be achieved, like forcing UIOnly mode in the CommonUI plugin when you need a UI to work in both game and UI mode…

Or disabling, at engine level, the ability to fine-grain control when the mouse cursor is shown or hidden. This does not seem very professional… The bShowMouseCursor and bEnableMouseOverEvents flags are there in C++ and Blueprint but have no effect?

So in the end you are alone against the engine UI, still needing to override engine code with your own in complex scenarios.

But I would not say it smells; it seems only to lack documentation, along with some explanation and reasoning about why these decisions were taken and what you can modify in the engine to get things working the way you want.


I’m (very soon) releasing my own series of plugins, including UI improvements which work with the “old” UI system. There simply wasn’t any need for CommonUI or EnhancedInput. There was a need for proper documentation, improved workflows, fixes and additions, which I offer.

A month after this post I wrote my own C++ solution for gamepad cardinal navigation in game and UI mode, using Enhanced Input (just the code that fits my needs). I control the focus at all times and do not lose track of it. It is the best I could have done; I learned a lot in the process and no longer depend on any plugin. I can say that the code for Slate and UMG is rock solid when properly understood and works really well. I hope in future versions they maintain how things are done right now and just make an effort to provide better documentation.

Well… I don’t agree on that point

EPIC, please stop breaking Slate and UMG in new releases.

Yeah, I ran into the same questions back then.
I ended up making a simple helper utility like this:

static bool IsKeyMappedToAction(const FKey& InKey, FName InActionName)
{
    // Walk the project's action mappings and check whether this key
    // is bound to the named action (so keys stay rebindable).
    const UPlayerInput* PlayerInput = GetDefault<UPlayerInput>();
    for (const auto& Action : PlayerInput->ActionMappings)
    {
        if (Action.ActionName == InActionName && Action.Key == InKey)
            return true;
    }
    return false;
}

Which I then used directly in NativeOnKeyDown

FReply USomeMenu::NativeOnKeyDown(const FGeometry& InGeometry, const FKeyEvent& InKeyEvent)
{
    if (ULib::IsKeyMappedToAction(InKeyEvent.GetKey(), "ToggleSomeMenu"))
    {
        Close();
        return FReply::Handled();
    }
    return Super::NativeOnKeyDown(InGeometry, InKeyEvent);
}

I haven’t touched these things since then though, I wonder how it has evolved.

Lately I’ve been seeing more and more UE games ship with lots of reconfigurable bindings, including fully reconfigurable UI navigation bindings, which was rarely ever the case before. Those config menus seem to suggest they are using input mapping contexts, so I’m guessing EnhancedInput probably made a large contribution to this evolution. How does this whole thing work with EnhancedInput nowadays? Can you just assign a mapping context to the UI, just like you assign a mapping context to a Pawn?