I need some basic suggestions on how to build a hierarchy of actors and components.
I want to have a cockpit with controls the user can interact with. The player will move their motion controller to hold the joystick and throttle, then let go to press a button or pull a lever on the control panel.
So far, I have a Pawn containing the cockpit mesh as the root component, a camera (locked to the HMD), and two motion controllers with collision spheres:
(The VR Recenter and HMD Default Position are just to allow a “Re-center camera” function)
As you can see, I have also added a joystick. But here my problems begin, and I could do with some suggestions on the best way to organize this.
At the moment, I have an OnCollisionOverlapBegin in the Pawn's EventGraph, where I cast to BP_Joystick to check whether the collision sphere is intersecting the joystick. This doesn't seem right, since I'd end up casting to every interactable type just to figure out which one was hit. It seems smarter to have the target object detect when the collision sphere overlaps it and respond appropriately. The trouble is, in the Joystick's EventGraph I don't know how to get a reference to the collision sphere. Maybe my VR hand should be a Scene Component or Actor Component blueprint, since I'll need to make it more complex later anyway (with meshes, different grips, and so on).

Another problem I see looming: I'll want to grab the joystick and move it, so the motion controller's movement needs to drive the joystick somehow. It seems best to put that logic in the joystick rather than the Pawn, no?
I could really do with some basic suggestions on how to organize the fundamentals of interaction between my hands/motion controllers and the component blueprints for my cockpit controls.