Mutually exclusive input actions, which approach sounds right?

To get back into Unreal Engine after some years of abstinence, I am prototyping what – for the purpose of this question – can be thought of as a simple model viewer slash editor. Imagine a 3D mesh you can orbit your camera around, pan and zoom, but where you can also temporarily use tools that let you change parts of the mesh with the mouse.

Now, I am wondering how to best structure the input for this. There are the camera controls, some tools that can be activated by hotkeys, and some things you can drag around. Some tools should, upon activation, disable the camera controls and prevent usage of all other tools that logically don't fit with them, but should potentially leave things like an omnipresent Escape menu or a HUD toggle available.

I’m seeing three options:

  1. Split the input code up into many actors, and have them disable input on other actors by direct communication when necessary. This sounds like it will be highly interconnected and get messy really fast.
  2. Use the Enhanced Input system and switch input mapping contexts. This at least gets rid of the direct communication between actors. While I like this concept for slightly larger things like switching control schemes, e.g. for a character on foot vs. in a vehicle, I'm not sure it's a good choice for the more fine-grained thing I need.
  3. Use Gameplay Abilities. I have not yet used them and have only watched the insightful Guided Tour of Gameplay Abilities, but I think I could use this to make each part of the camera controls (orbit, zoom, pan) and each editing tool its own action, and tightly control what blocks what. Does that sound sane?

Please share your wisdom with me, people who have implemented mutually-exclusive input patterns!

An enum for the mode, a switch to call the right function, and however many events are needed to keep things individualized and legible.

If done right, this can likely be the best option.
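
Roughly, something like this sketch (the enum, the pawn class and all the handler names are just made up to illustrate the pattern):

```cpp
// ModelViewerPawn.h: illustrative sketch only; EViewerMode and the handlers
// are hypothetical, not an existing engine API.
#pragma once

#include "CoreMinimal.h"
#include "GameFramework/Pawn.h"
#include "ModelViewerPawn.generated.h"

UENUM(BlueprintType)
enum class EViewerMode : uint8
{
	CameraOrbit,
	CameraPan,
	SculptTool,
	Menu
};

UCLASS()
class AModelViewerPawn : public APawn
{
	GENERATED_BODY()

public:
	// Every input event gets one entry point; the switch routes it to
	// whatever the active mode wants to do with it.
	void HandleMouseDrag(const FVector2D& Delta)
	{
		switch (CurrentMode)
		{
		case EViewerMode::CameraOrbit: OrbitCamera(Delta); break;
		case EViewerMode::CameraPan:   PanCamera(Delta);   break;
		case EViewerMode::SculptTool:  ApplySculpt(Delta); break;
		default: break; // e.g. Menu: dragging does nothing in this mode
		}
	}

	void SetMode(EViewerMode NewMode) { CurrentMode = NewMode; }

private:
	EViewerMode CurrentMode = EViewerMode::CameraOrbit;

	void OrbitCamera(const FVector2D& Delta) { /* rotate around the mesh */ }
	void PanCamera(const FVector2D& Delta)   { /* translate the pivot */ }
	void ApplySculpt(const FVector2D& Delta) { /* deform mesh under cursor */ }
};
```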

Let’s take Blender as an example mesh editing application. You can use the MMB, combined with Shift and Ctrl, to orbit, pan and zoom the camera. When you press any of the usual tool buttons (G, R, S) you go into a separate mode to translate, rotate or scale. However, each of those modes still gives you the opportunity to press X, Y, Z (Shift to negate) to constrain to axes, hold Shift for precision mode and Ctrl for snapping. Many other tools provide extra hotkeys that override others, e.g. the FlyCam uses TAB to toggle gravity, which is usually for toggling Object/Edit mode.

Cramming this into a single input handler’s state machine will entangle all those tools and won’t make it easy to add or rip out tools. I’m of course not trying to rival Blender in complexity, but the principle holds.

Gameplay Abilities and Tags sound like they could make for looser coupling and be Unreal's answer to this problem of input/action interdependency, right?
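
To make that concrete, here is a hedged sketch of what I imagine a tool ability could declare, with made-up tag names and ability class; the tags would still need to be registered in the project's gameplay tag list, and exact member names may vary between engine versions:

```cpp
// SculptToolAbility.h: hypothetical example of expressing mutual exclusion
// through Gameplay Tags; class and tag names are invented for illustration.
// Requires the GameplayAbilities plugin/module.
#pragma once

#include "CoreMinimal.h"
#include "Abilities/GameplayAbility.h"
#include "SculptToolAbility.generated.h"

UCLASS()
class USculptToolAbility : public UGameplayAbility
{
	GENERATED_BODY()

public:
	USculptToolAbility()
	{
		// While this tool is active, mark the owner as holding exclusive input.
		ActivationOwnedTags.AddTag(FGameplayTag::RequestGameplayTag(TEXT("Input.Exclusive")));

		// While active, block camera abilities and other tools from activating.
		// Abilities tagged e.g. "UI.EscapeMenu" are simply not listed here,
		// so they stay available.
		BlockAbilitiesWithTag.AddTag(FGameplayTag::RequestGameplayTag(TEXT("Camera")));
		BlockAbilitiesWithTag.AddTag(FGameplayTag::RequestGameplayTag(TEXT("Tool")));

		// Refuse to activate while something else already holds exclusive input.
		ActivationBlockedTags.AddTag(FGameplayTag::RequestGameplayTag(TEXT("Input.Exclusive")));
	}
};
```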

What do you care how tangled things are if all you need to do is move individual nodes around?
That's the point of using an enum with a select.
You have different paths for different modes that you simply connect a function call to…

I care because I have seen the pain and suffering that tight coupling causes in grown projects and teams. Having a single central switch with easily 100+ states (in the case of the Blender example) sounds horrifying. For the purpose of my prototype, you're right that it doesn't matter much and it would probably suffice. But I'm also trying to learn what the engine has to offer and what scalable solutions would look like.

I have two ideas (or really a single one, just approached from two opposite sides):

Create a Blueprint Interface that you can drop into any Blueprint. It will send over some events like "button_left pressed" etc. Then gather the input in one Blueprint such as the Game Mode, decide which actor is responsible for the action, and call its Blueprint Interface event. But this will turn the Game Mode into a Cthulhu Blueprint.
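
In C++ the same interface idea could look roughly like this; the interface and the event name are invented, and a Blueprint Interface with an equivalent event works the same way on the Blueprint side:

```cpp
// ViewerInputReceiver.h: sketch of a hypothetical input-receiver interface.
#pragma once

#include "CoreMinimal.h"
#include "UObject/Interface.h"
#include "ViewerInputReceiver.generated.h"

UINTERFACE(BlueprintType)
class UViewerInputReceiver : public UInterface
{
	GENERATED_BODY()
};

class IViewerInputReceiver
{
	GENERATED_BODY()

public:
	// The Game Mode (or Player Controller) gathers the raw input, decides
	// which actor is responsible, and calls this on it.
	UFUNCTION(BlueprintCallable, BlueprintNativeEvent, Category = "Input")
	void OnViewerInput(FName ActionName, bool bPressed);
};
```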

The other way:
Make a dispatcher in the Game Mode, with an enum and some easy way to check which type of action/event was triggered. All Blueprints that need to handle some action hook into that dispatcher and decide themselves, based on some state variables, whether the action is for them or not. You will not have central Cthulhu-like spaghetti code, but everything will be everywhere. To contain that mess you can use Blueprint Interfaces (the code will always live in the same function/event of that interface) or Blueprintable Components, but those are messy.
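
A rough C++ sketch of that dispatcher variant, with an invented action enum and Game Mode class; every listener binds to the delegate and filters for itself:

```cpp
// ViewerGameMode.h: sketch of a central dispatcher; all names are made up.
#pragma once

#include "CoreMinimal.h"
#include "GameFramework/GameModeBase.h"
#include "ViewerGameMode.generated.h"

UENUM(BlueprintType)
enum class EViewerAction : uint8
{
	PrimaryPressed,
	PrimaryReleased,
	ToggleHUD,
	OpenEscapeMenu
};

DECLARE_DYNAMIC_MULTICAST_DELEGATE_OneParam(FOnViewerAction, EViewerAction, Action);

UCLASS()
class AViewerGameMode : public AGameModeBase
{
	GENERATED_BODY()

public:
	// Any Blueprint or C++ object can bind to this dispatcher and decide,
	// based on its own state, whether the action concerns it.
	UPROPERTY(BlueprintAssignable, Category = "Input")
	FOnViewerAction OnViewerAction;

	// Called from the input bindings; it only broadcasts, nothing more.
	UFUNCTION(BlueprintCallable, Category = "Input")
	void DispatchAction(EViewerAction Action) { OnViewerAction.Broadcast(Action); }
};
```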

And the best solution would be C++:

  • you can manage key bindings
  • maintaining (big) functions in C++/text is easier, and it is more readable
  • to write it, all you probably need is some ifs, switch cases etc., nothing advanced

It’s not an unworthy effort, but don’t look at what the engine has to offer, which is really less than 0…

Look at what C++ has to offer so you can expand the solution if need be.

That said, the engine itself makes heavy use of enums and switches for modality things - like the movement mode of a character, for instance.
You normally aren't aware of this, simply because it's coded up in the C++ of the Character class. Unless you break down the code, you'd literally never know.

The task at hand is complex, so it's only normal to end up with a jumble of noodles if you use BP.
Doing the same in C++ can be quite legible and snappy.

Personally I prefer switch/case statements to nested ifs. But a never-ending list of explicitly checked if/else could work too.
My reason for the preference is the default option…

Lastly.
You can always override the onInput event and set everything up with a custom controller class / modify the engine source.

I'm not suggesting you go from the very base of it, because let's be honest: after you try to connect a new custom device and write a driver for it once, you'll never want to deal with XInput again in your life…

But since the engine mostly handles the interfacing for you already, and peripherals generally just work, it's worth noting that an override of the engine's default handling is probably a good idea here.
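
For reference, a hedged sketch of what such a controller override could look like using the classic input path; the class and handler names are made up, and Enhanced Input actions could replace the raw key bindings without changing the override point:

```cpp
// ViewerPlayerController.h: sketch of funnelling all input through one
// custom controller; names are illustrative only.
#pragma once

#include "CoreMinimal.h"
#include "GameFramework/PlayerController.h"
#include "Components/InputComponent.h"
#include "ViewerPlayerController.generated.h"

UCLASS()
class AViewerPlayerController : public APlayerController
{
	GENERATED_BODY()

protected:
	virtual void SetupInputComponent() override
	{
		Super::SetupInputComponent();

		// The controller owns the mode state, so it decides what each key does.
		InputComponent->BindKey(EKeys::Escape, IE_Pressed, this, &AViewerPlayerController::OpenEscapeMenu);
		InputComponent->BindKey(EKeys::LeftMouseButton, IE_Pressed, this, &AViewerPlayerController::OnPrimaryPressed);
	}

	void OpenEscapeMenu()
	{
		// Always available, regardless of the current mode.
	}

	void OnPrimaryPressed()
	{
		// Route according to the current mode, e.g. via an enum/switch as above.
	}
};
```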

You could literally just add the enum to the various input nodes.
Or even create a specific input node for each modality…

How you do it probably depends more on how many states you need to handle.
For an unbounded number, you need to go modular.
For a finite amount, you could even just have differently named inputs…

Right, no need to write a custom dispatcher though - this is what the Unreal input system can do from the get-go.
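
For example, activating a tool could just swap Input Mapping Contexts on the local player subsystem; a minimal sketch with made-up context variables (a context for the always-available actions like the Escape menu would simply stay registered at a higher priority):

```cpp
// Sketch of option 2: swapping Enhanced Input mapping contexts on tool
// activation; the function and the context parameters are hypothetical.
#include "EnhancedInputSubsystems.h"
#include "InputMappingContext.h"
#include "Engine/LocalPlayer.h"
#include "GameFramework/PlayerController.h"

void SwitchToSculptTool(APlayerController* PC,
                        UInputMappingContext* CameraContext,
                        UInputMappingContext* SculptContext)
{
	if (ULocalPlayer* LocalPlayer = PC ? PC->GetLocalPlayer() : nullptr)
	{
		if (auto* Subsystem = LocalPlayer->GetSubsystem<UEnhancedInputLocalPlayerSubsystem>())
		{
			// Drop the camera bindings and push the tool's bindings instead.
			Subsystem->RemoveMappingContext(CameraContext);
			Subsystem->AddMappingContext(SculptContext, /*Priority=*/1);
		}
	}
}
```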

I’ve listed three engine-provided approaches above. If you think they all are worthless, I’m honestly interested in why that is. Have you used them and found them to be lacking? Especially the Gameplay Abilities that encapsulate functionality and input and at the same time decouple communication via Tags sounds like a decent system, does it not? If I’d hook my own system in C++, the paradigm would probably look similar.

I have used both.
The Gameplay Ability system is decent, but overall it's cumbersome.
What you need is fairly simple.
Yes, you can leverage Gameplay Abilities if the system also provides something else you need.
Otherwise, I’d stick to my own guns…

As a general rule of thumb, because Unreal is about as performance-friendly as shooting yourself in the foot right before a race, I stay away from anything that can lock a project into it.
If/when I use Unreal, it is mostly to publish Marketplace stuff for it.
Real work is done directly in VS using other engines.
For hobby work, sure.
Anything else, unless it happens to be a paid consulting gig or a requirement, I defer to better engines.

That’s also why I’d say to stay away from the plugins.

On top of that… isn't it better to learn how to do this at a base level so you can do it for any other engine/system at any time?
You'll have to put hours into it no matter what. IMHO it's best to learn stuff you can take to other things.


Decent advice, thank you.

I can, and I have. It's not about learning basic code architecture or C++; I consider myself somewhat proficient and experienced at both as an engineer in VR and game development - I should maybe have clarified that to get more targeted answers. It's about learning the tools that Unreal offers and understanding their principles so I can make informed decisions about using or not using them in future projects, or take what I learned about the principles that work / don't work and use that knowledge independently of Unreal.

I will give the Gameplay Abilities a go when I get to it then, if only for science. I’ll report back if that was a smart idea or not so much.
