Hi,
I have been running around in circles on an impossible issue (relatively impossible anyway).
In VR there are cases where you want to use the HMD to look at an object and trace against it. This, however, doesn't work in the UMG environment beyond the WidgetComponent itself, which does accept a trace on the Visibility channel, for example.
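For reference, this is the kind of gaze trace I mean — a minimal sketch, assuming the HMD pose is already applied to the player camera; the function name, PlayerController parameter, and trace length are placeholders:

```cpp
#include "Components/WidgetComponent.h"

void TraceGazeAtWidgets(APlayerController* PlayerController)
{
	FVector ViewLocation;
	FRotator ViewRotation;
	PlayerController->GetPlayerViewPoint(ViewLocation, ViewRotation);

	const FVector TraceEnd = ViewLocation + ViewRotation.Vector() * 10000.f;

	FHitResult Hit;
	FCollisionQueryParams Params(FName(TEXT("GazeTrace")), /*bTraceComplex*/ true);
	if (PlayerController->GetWorld()->LineTraceSingleByChannel(Hit, ViewLocation, TraceEnd, ECC_Visibility, Params))
	{
		// The WidgetComponent responds on the Visibility channel, so the
		// trace can return the component itself...
		if (UWidgetComponent* WidgetComp = Cast<UWidgetComponent>(Hit.GetComponent()))
		{
			// ...but this is as deep as the trace goes: no individual
			// widget under the hit location is ever reported.
		}
	}
}
```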
Everything I have read suggests that unless Widgets themselves support collision channels, the trace option simply isn't there.
Now there are slow workarounds for this, like getting all the child widgets of the main Widget, then determining whether the trace passed through their location (via some 2D projection magic) and hit a UserWidget.
However, this is rather costly and seems a bit over the top compared to a dedicated widget trace setup like the one Objects get.
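A rough sketch of that workaround, assuming UWidgetComponent's GetLocalHitLocation and GetUserWidgetObject helpers (present in recent engine versions); the actual child hit-testing is only outlined:

```cpp
#include "Components/WidgetComponent.h"
#include "Blueprint/UserWidget.h"

void FindWidgetUnderHit(UWidgetComponent* WidgetComp, const FHitResult& Hit)
{
	// Convert the world-space impact point into the widget's 2D draw space.
	FVector2D LocalHit;
	WidgetComp->GetLocalHitLocation(Hit.ImpactPoint, LocalHit);

	if (UUserWidget* RootWidget = WidgetComp->GetUserWidgetObject())
	{
		// Walk RootWidget's children and test each one's cached geometry
		// against LocalHit (the "2D projection magic" above) -- mind the
		// coordinate spaces, since cached geometry lives in the virtual
		// window's space, not the component's.
	}
}
```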
I suggest a new trace function for 3D UMG Widgets that returns a hit result with the WidgetComponent as the hit actor and the UserWidgets as hit components.
This would avoid cases where, in order to handle MouseEnter/MouseLeave events, the mouse must be moved in code (or with Rama's node) to the widget's position just to fire OnHovered, for example. This is a big issue with VR menus and UMG, as there seems to be no way to determine whether the Unreal window is modal in order to disable forced mouse positioning in an HMD gaze cursor + UMG scenario.
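For clarity, the forced-mouse-positioning hack I mean looks roughly like this — a sketch, with placeholder names:

```cpp
void WarpMouseToGazeHit(APlayerController* PlayerController, const FVector& WorldHit)
{
	// Project the gaze hit to screen space and warp the OS cursor there so
	// that Slate's normal hit-testing fires OnHovered and friends.
	FVector2D ScreenPos;
	if (PlayerController->ProjectWorldLocationToScreen(WorldHit, ScreenPos))
	{
		// This is the fragile part: if the Unreal window isn't the focused
		// one, the warped cursor lands somewhere unrelated.
		PlayerController->SetMouseLocation((int32)ScreenPos.X, (int32)ScreenPos.Y);
	}
}
```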
Thanks
Tracing is inadequate, I'm afraid. Whatever is interacting with the widget must perform the same sort of mouse or finger actions, delivered the same way Slate would send them. You also can't assume that what you hit is actually a UUserWidget; it may be a UWidget, and in reality they're all going to be Slate widgets, some with a corresponding UWidget and some without.

The correct implementation is what the VR-Editor does: you need a component (or components) in the world that traces to find the widget component, grabs the underlying Slate widget tree and widget path in the local space of the widget, and then transmits all the standard Slate events every frame that it updates, properly sending mouse enter/leave, up/down, wheel, etc.

You'll also need to make sure those components get hardware input no matter what, which is a bit tricky; for the VR-Editor we had to add a SlateInputPreprocessor to make sure they got hardware input even if the viewport didn't have focus. You have to do this in case you focus a widget in the world, as that currently causes problems with the viewport and the player input system getting input.
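The preprocessor piece, roughly, as a minimal sketch: IInputProcessor and FSlateApplication::RegisterInputPreProcessor exist in recent engine versions (older ones used SetInputPreProcessor instead), and the class name and forwarding target here are illustrative, not the VR-Editor's actual implementation.

```cpp
#include "Framework/Application/SlateApplication.h"
#include "Framework/Application/IInputProcessor.h"

class FWorldWidgetInputPreprocessor : public IInputProcessor
{
public:
	virtual void Tick(const float DeltaTime, FSlateApplication& SlateApp,
	                  TSharedRef<ICursor> Cursor) override {}

	virtual bool HandleKeyDownEvent(FSlateApplication& SlateApp,
	                                const FKeyEvent& InKeyEvent) override
	{
		// Forward to your world-space interaction component here, so it
		// still sees hardware input when a widget in the world has focus
		// and the viewport does not. Return true to consume the event.
		return false;
	}
};

// Register once, e.g. at module or game startup:
// FSlateApplication::Get().RegisterInputPreProcessor(
//     MakeShareable(new FWorldWidgetInputPreprocessor()));
```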
Thanks for the reply Nick.
I understand what you are saying here. After a few days trawling the underlying Slate code trying to force bIsHovered = true on trace (to no effect), I more or less gave up on it, as I can't start making core changes to the widgets in our project right now.
Now call me a noob if you like, but what I can't get my head around is this: if the process knows the local-space locations of the widgets to be painted each frame, why can't the widget component store those for use in world space when trying to get a reference to the widget under a given location, such as the hit location on the WidgetComponent?
Sorry if it’s a rudimentary question. This slate business is quite new to me.
Knowing the widgets under a specific local location on the widget surface is not good enough. Some external component functioning as a virtual finger must pretend to be a real finger and interact with the surface. This requires telling FSlateApplication to route the necessary commands into that space using the widget component's widget tree.
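In code, roughly, something like this — a sketch, not the engine's actual implementation. It assumes UWidgetComponent's GetLocalHitLocation/GetHitWidgetPath helpers (the exact GetHitWidgetPath signature varies across engine versions) and FSlateApplication's public RoutePointer* entry points; pointer index 0 stands in for a dedicated virtual pointer, and SimulateFingerOnWidget is a made-up name.

```cpp
#include "Components/WidgetComponent.h"
#include "Framework/Application/SlateApplication.h"
#include "Layout/WidgetPath.h"

void SimulateFingerOnWidget(UWidgetComponent* WidgetComp, const FVector& WorldHit,
                            const FVector2D& LastLocalHit, bool bPress)
{
	// Convert the world-space hit into the widget's local 2D space.
	FVector2D LocalHit;
	WidgetComp->GetLocalHitLocation(WorldHit, LocalHit);

	// Grab the slate widget path under the hit, in the component's space.
	// NOTE: assumed signature -- check your engine version's WidgetComponent.h.
	TArray<FWidgetAndPointer> WidgetsUnderPointer;
	WidgetComp->GetHitWidgetPath(WorldHit, /*bIgnoreEnabledStatus*/ false, WidgetsUnderPointer);
	FWidgetPath PathUnderFinger(WidgetsUnderPointer);

	// Per-frame move: this is what drives MouseEnter/MouseLeave and hover state.
	FPointerEvent MoveEvent(0, LocalHit, LastLocalHit,
	                        TSet<FKey>(), FKey(), 0.0f, FModifierKeysState());
	FSlateApplication::Get().RoutePointerMoveEvent(PathUnderFinger, MoveEvent, /*bIsSynthetic*/ false);

	if (bPress)
	{
		// Press/release pair, exactly what Slate expects from a real finger.
		FPointerEvent PressEvent(0, LocalHit, LocalHit,
		                         TSet<FKey>(), EKeys::LeftMouseButton, 0.0f, FModifierKeysState());
		FSlateApplication::Get().RoutePointerDownEvent(PathUnderFinger, PressEvent);
		FSlateApplication::Get().RoutePointerUpEvent(PathUnderFinger, PressEvent);
	}
}
```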