Advice on best approach

I’m fairly new to Unreal and have only created a couple of prototype apps. I’m looking to build a 3D training experience in UE5 VR and am wondering what would be a good approach.

At the moment, a couple of the key elements would be some form of info triangle in 3D space which, when touched by the controller, would produce a callout (which may contain text, images or video) or trigger an animation in roughly the same location.

So in the example attached, the callout by the power button might produce some text about the power button and then demo a hand pressing it.

I’m wondering about a couple of things. The info triangles are currently set up as a single blueprint; should they instead be set up as children of a master blueprint, so that each one inherits certain behaviours (collision detection, visibility ‘off’ on touch, rotation) from the master but can then be given its own individual behaviour for each action? I’m not sure how to structure that.
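For context, the structure I have in mind is roughly the following, written out as a C++ parallel (all class, property and function names are placeholders of mine; in practice it would probably be a parent Blueprint with child Blueprints doing the same thing):

```cpp
// InfoMarkerBase.h -- hypothetical parent ("master") class. Child classes or
// child Blueprints inherit the shared behaviour and only override what happens
// when the marker is activated.
#pragma once

#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "Components/StaticMeshComponent.h"
#include "InfoMarkerBase.generated.h"

UCLASS()
class AInfoMarkerBase : public AActor
{
    GENERATED_BODY()

public:
    AInfoMarkerBase()
    {
        PrimaryActorTick.bCanEverTick = true;
        MarkerMesh = CreateDefaultSubobject<UStaticMeshComponent>(TEXT("MarkerMesh"));
        MarkerMesh->SetGenerateOverlapEvents(true);
        SetRootComponent(MarkerMesh);
    }

    // Shared behaviour: slow idle rotation so the marker catches the eye.
    virtual void Tick(float DeltaSeconds) override
    {
        Super::Tick(DeltaSeconds);
        AddActorLocalRotation(FRotator(0.f, RotationSpeed * DeltaSeconds, 0.f));
    }

protected:
    virtual void BeginPlay() override
    {
        Super::BeginPlay();
        // Shared behaviour: react to the controller overlapping the marker.
        MarkerMesh->OnComponentBeginOverlap.AddDynamic(this, &AInfoMarkerBase::HandleBeginOverlap);
    }

    UFUNCTION()
    void HandleBeginOverlap(UPrimitiveComponent* OverlappedComp, AActor* OtherActor,
                            UPrimitiveComponent* OtherComp, int32 OtherBodyIndex,
                            bool bFromSweep, const FHitResult& SweepResult)
    {
        // Shared behaviour: hide the marker and stop further overlaps on touch.
        SetActorHiddenInGame(true);
        SetActorEnableCollision(false);

        // Individual behaviour: each child decides what its callout/animation does.
        OnMarkerActivated(OtherActor);
    }

    // Overridden per marker (in a child class or child Blueprint).
    UFUNCTION(BlueprintNativeEvent)
    void OnMarkerActivated(AActor* ActivatingActor);
    virtual void OnMarkerActivated_Implementation(AActor* ActivatingActor) {}

    UPROPERTY(EditAnywhere, Category = "Marker")
    float RotationSpeed = 45.f;

    UPROPERTY(VisibleAnywhere, Category = "Marker")
    UStaticMeshComponent* MarkerMesh;
};
```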

Then for the callouts, I’m wondering about creating widgets for each event and locating them in 3D space. Would that be a sensible approach?

Any ideas would be appreciated.

Consider the following:

  • override the widget component itself:


This way you get a widget component with its own graph and variables, and you can script the desired behaviour right into it. This custom component can then be attached to anything else in the world (the power button, mayhap?) or even spawned dynamically.
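As a rough sketch of that idea in C++ (the class, property and function names are just placeholders; the same structure works as a Blueprint child of WidgetComponent with variables and functions on it, and the module needs UMG in its Build.cs dependencies):

```cpp
// CalloutWidgetComponent.h -- hypothetical child of UWidgetComponent. It has
// its own exposed variables and functions, so every placed instance can be
// configured and driven individually.
#pragma once

#include "CoreMinimal.h"
#include "Components/WidgetComponent.h"
#include "Engine/Texture2D.h"
#include "CalloutWidgetComponent.generated.h"

UCLASS(ClassGroup = (Training), meta = (BlueprintSpawnableComponent))
class UCalloutWidgetComponent : public UWidgetComponent
{
    GENERATED_BODY()

public:
    // Per-instance data exposed on the component itself.
    UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = "Callout")
    FText CalloutText;

    UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = "Callout")
    UTexture2D* CalloutImage = nullptr;

    // Behaviour scripted right on the component.
    UFUNCTION(BlueprintCallable, Category = "Callout")
    void ShowCallout() { SetVisibility(true); }

    UFUNCTION(BlueprintCallable, Category = "Callout")
    void HideCallout() { SetVisibility(false); }
};
```

Because it’s just a component, you can drop it onto any actor in the editor, or add it at runtime (for example with AddComponentByClass), and give each instance its own text and image.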

> So in the example attached, the callout by the power button might produce some text about the power button and then demo a hand pressing it.

Let’s say this is the power button actor:

  • it has a mesh
  • it has our custom widget component with exposed variables

This way you can configure each widget manually. But you can also opt to hold the necessary data in the power button actor instead and have the widget pull it from its owner.
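A minimal sketch of that actor, assuming the hypothetical CalloutWidgetComponent from above (again, every name here is made up): the text can be set per instance on the component, or kept on the actor and handed across, e.g. by the component calling Cast<APowerButtonActor>(GetOwner()) at BeginPlay.

```cpp
// PowerButtonActor.h -- hypothetical actor from the example: a mesh plus the
// custom callout widget component. All names are illustrative.
#pragma once

#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "Components/StaticMeshComponent.h"
#include "CalloutWidgetComponent.h"
#include "PowerButtonActor.generated.h"

UCLASS()
class APowerButtonActor : public AActor
{
    GENERATED_BODY()

public:
    APowerButtonActor()
    {
        ButtonMesh = CreateDefaultSubobject<UStaticMeshComponent>(TEXT("ButtonMesh"));
        SetRootComponent(ButtonMesh);

        // The custom widget component rides along with the button mesh.
        Callout = CreateDefaultSubobject<UCalloutWidgetComponent>(TEXT("Callout"));
        Callout->SetupAttachment(ButtonMesh);
    }

    // Option B: keep the data on the actor and hand it to the widget component
    // (or have the component pull it via Cast<APowerButtonActor>(GetOwner())).
    UPROPERTY(EditAnywhere, Category = "Training")
    FText PowerButtonInfo;

protected:
    virtual void BeginPlay() override
    {
        Super::BeginPlay();
        Callout->CalloutText = PowerButtonInfo;
        Callout->HideCallout();   // hidden until the info marker is touched
    }

    UPROPERTY(VisibleAnywhere)
    UStaticMeshComponent* ButtonMesh;

    UPROPERTY(VisibleAnywhere)
    UCalloutWidgetComponent* Callout;
};
```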