Best approach to interactions with a large actor

Hello there,

My current project has an interaction system based on raycasting: every time F is pressed, a ray is cast from the center of the screen, and if the hit object can be interacted with, the corresponding event is triggered. This works well for simple objects, like doors for example. However, I am not sure how I should go about larger systems. For example, take a spaceship made of dozens of static meshes, with perhaps 10 of them being buttons. How can I distinguish which button has been interacted with and associate a function with it?
I see two options:
The first is making one actor per button (although a whole actor for a single button seems like overkill, from my newbie point of view). But wouldn’t that make editing the design of the ship annoying, since all the actors would have to be kept coordinated?
The second is having one actor with a lot of interactable components, each holding a mesh. But then I think the raycast is actor-based and will return the actor as the hit object instead of the component holding the mesh, so I don’t know how to differentiate between buttons…

Any pointers would be appreciated :smiley:
Thank you

They can be separate Blueprint actors, or you can trace against the components of an actor.

The line trace tells you which component you hit.
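
A minimal Unreal C++ sketch of that, assuming a hypothetical `AMyCharacter` with a `UCameraComponent* Camera` member (the names and the 500-unit reach are just placeholders):

```cpp
#include "Camera/CameraComponent.h"
#include "Engine/World.h"

void AMyCharacter::Interact()
{
    const FVector Start = Camera->GetComponentLocation();
    const FVector End = Start + Camera->GetForwardVector() * 500.f; // arbitrary reach distance

    FHitResult Hit;
    FCollisionQueryParams Params;
    Params.AddIgnoredActor(this); // don't hit ourselves

    if (GetWorld()->LineTraceSingleByChannel(Hit, Start, End, ECC_Visibility, Params))
    {
        // The hit result carries both the owning actor and the exact component that was struck.
        AActor* HitActor = Hit.GetActor();
        UPrimitiveComponent* HitComponent = Hit.GetComponent();
        UE_LOG(LogTemp, Log, TEXT("Hit %s on %s"),
               *GetNameSafe(HitComponent), *GetNameSafe(HitActor));
    }
}
```

So even with one ship actor, each button mesh comes back as its own hit component.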

Another likely issue, for the player this time, is knowing which part of the object they are about to interact with. That’s where an interaction cue comes in handy:

[image: on-screen “interact” prompt]

In the case of a spaceship, you would probably go for a combination: a console could be one actor, a door another, etc.

The way I solve this is to use a special actor component for “interactable,” rather than just a special actor. I can then add one or more interactable components to any actor I want to be able to interact with.
Also, I put an overlap test sphere in front of the character, and my interactable components have overlap spheres of their own; when the two overlap, I add the interactable as an option for the character. This means that when I’m close enough and looking at the interactable, I can show “press F to build emotional connection with disposable NPC” or whatever on the screen, without having to cast rays. On top of that, I can show more than one possible interaction, and have some UI to toggle which one I’m most interested in.
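
Not the poster’s actual code, but a rough Unreal C++ sketch of the character-side overlap handling under a few assumptions: `ReachSphere` is a `USphereComponent` in front of the character, `UInteractableComponent` is a custom component deriving from `USphereComponent`, and `AvailableInteractables` is a `TArray` on the character; all of these names are hypothetical. The handlers must be declared as `UFUNCTION()` in the header for `AddDynamic` to bind them.

```cpp
void AMyCharacter::BeginPlay()
{
    Super::BeginPlay();
    ReachSphere->OnComponentBeginOverlap.AddDynamic(this, &AMyCharacter::OnReachBegin);
    ReachSphere->OnComponentEndOverlap.AddDynamic(this, &AMyCharacter::OnReachEnd);
}

void AMyCharacter::OnReachBegin(UPrimitiveComponent* OverlappedComp, AActor* OtherActor,
                                UPrimitiveComponent* OtherComp, int32 OtherBodyIndex,
                                bool bFromSweep, const FHitResult& SweepResult)
{
    // Offer the interactable to the player while the two spheres overlap.
    if (UInteractableComponent* Interactable = Cast<UInteractableComponent>(OtherComp))
    {
        AvailableInteractables.AddUnique(Interactable); // drives the "press F to ..." prompt
    }
}

void AMyCharacter::OnReachEnd(UPrimitiveComponent* OverlappedComp, AActor* OtherActor,
                              UPrimitiveComponent* OtherComp, int32 OtherBodyIndex)
{
    if (UInteractableComponent* Interactable = Cast<UInteractableComponent>(OtherComp))
    {
        AvailableInteractables.Remove(Interactable);
    }
}
```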

Feature creep is now real; I am so adding this to all my projects.


@Serwin_F I extend static mesh components. This is very close to what’s mentioned above, but the component already has a mesh. It also gets a graph, variables, functionality, inheritance, and an interface on top.

You can build smart, versatile, interactable pieces of geometry that are not fully fledged actors but can be attached to actors.

When you retrieve a component reference from a trace, you can tell it to do its own thing, since it’s full of logic. Components can also easily reach out to the actor that owns them.

The best part is that components like this can be added and removed dynamically.
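
A minimal sketch of what such a subclass could look like; `UInteractableMeshComponent`, `Prompt` and `OnInteracted` are hypothetical names, not the poster’s actual class:

```cpp
#pragma once

#include "CoreMinimal.h"
#include "Components/StaticMeshComponent.h"
#include "InteractableMeshComponent.generated.h"

DECLARE_DYNAMIC_MULTICAST_DELEGATE(FOnInteracted);

UCLASS(ClassGroup = (Custom), meta = (BlueprintSpawnableComponent))
class UInteractableMeshComponent : public UStaticMeshComponent
{
    GENERATED_BODY()

public:
    // Prompt shown to the player, editable per button instance in the editor.
    UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = "Interaction")
    FText Prompt;

    // Bound by the owning actor (e.g. the spaceship) or by other systems.
    UPROPERTY(BlueprintAssignable, Category = "Interaction")
    FOnInteracted OnInteracted;

    // Called by the player character after a trace returns this component.
    UFUNCTION(BlueprintCallable, Category = "Interaction")
    void Interact()
    {
        OnInteracted.Broadcast();
        // The component can also reach its owner directly:
        // if (AActor* Owner = GetOwner()) { /* notify the ship, etc. */ }
    }
};
```

On the character side, something like `Cast<UInteractableMeshComponent>(Hit.GetComponent())` on the trace result from earlier gives you the button, and calling `Interact()` lets the component run its own logic or notify its owning actor.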