Web UI (HTML/CSS/JS Interface Widgets)

Alright, so after looking into this some more, here's a quick prototype I put together using blueprints. I'll obviously develop something more streamlined in C++ for the plugin itself, but at least this lets us experiment with the URetainerBox.

If you start with the example project, these are some basic changes to get things up and running. First, go into the MyController blueprint and enable the following options:

1.PNG
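
For reference, here's a rough C++ equivalent of that controller setup — a sketch, assuming the options in the screenshot are Enable Click Events and Enable Mouse Over Events plus showing the cursor; the class name is just a placeholder:

```cpp
#pragma once

#include "CoreMinimal.h"
#include "GameFramework/PlayerController.h"
#include "MyController.generated.h"

UCLASS()
class AMyController : public APlayerController
{
	GENERATED_BODY()

public:
	AMyController()
	{
		bShowMouseCursor = true;        // show the cursor so we can hover/click meshes
		bEnableClickEvents = true;      // routes clicks to the actor OnClicked events
		bEnableMouseOverEvents = true;  // routes hover to Begin/EndCursorOver events
	}
};
```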

This provides a “clickable interface” and allows any actor to respond to the OnClicked(), OnBeginCursorOver(), and OnEndCursorOver() events. Now create a temporary blueprint called MyActor and parent it to StaticMeshActor instead of the default Actor class. Then add the following nodes to the event graph for debugging:
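
If you'd rather do the debugging part in C++, a minimal sketch would be a StaticMeshActor subclass that overrides the click/cursor notifications and prints to the screen (class name and messages are placeholders, and the exact override signatures can vary slightly by engine version):

```cpp
#pragma once

#include "CoreMinimal.h"
#include "Engine/Engine.h"
#include "Engine/StaticMeshActor.h"
#include "MyActor.generated.h"

UCLASS()
class AMyActor : public AStaticMeshActor
{
	GENERATED_BODY()

protected:
	// Fires when the player controller has click events enabled and this actor is clicked.
	virtual void NotifyActorOnClicked(FKey ButtonPressed) override
	{
		GEngine->AddOnScreenDebugMessage(-1, 2.0f, FColor::Green,
			FString::Printf(TEXT("%s clicked"), *GetName()));
	}

	// Fires when the cursor starts hovering this actor (mouse over events enabled).
	virtual void NotifyActorBeginCursorOver() override
	{
		GEngine->AddOnScreenDebugMessage(-1, 2.0f, FColor::Yellow,
			FString::Printf(TEXT("%s hover begin"), *GetName()));
	}

	// Fires when the cursor stops hovering this actor.
	virtual void NotifyActorEndCursorOver() override
	{
		GEngine->AddOnScreenDebugMessage(-1, 2.0f, FColor::Yellow,
			FString::Printf(TEXT("%s hover end"), *GetName()));
	}
};
```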

Once this blueprint is created, you can go into MyMap and replace any static mesh actors with your custom blueprint. Right-click each mesh you'd like to debug and select “Replace Selected Actors with”:

3.PNG

Then open the InterfaceHUD blueprint and change the default input mode from UI Only to Game and UI:

Also open the MyHUD blueprint and change the default visibility from Visible to Self Hit Test Invisible:

6.PNG
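
If you end up doing this part in C++ instead, the same two changes look roughly like the sketch below (SetupInterfaceInput, PlayerController, and InterfaceWidget are placeholder names for wherever you create the HUD widget):

```cpp
#include "Blueprint/UserWidget.h"
#include "GameFramework/PlayerController.h"

// Hypothetical helper: apply the two settings from the screenshots at widget creation time.
static void SetupInterfaceInput(APlayerController* PlayerController, UUserWidget* InterfaceWidget)
{
	// "Game and UI" input mode so mouse events reach both Slate and the world.
	FInputModeGameAndUI InputMode;
	InputMode.SetWidgetToFocus(InterfaceWidget->TakeWidget());
	InputMode.SetHideCursorDuringCapture(false);
	PlayerController->SetInputMode(InputMode);
	PlayerController->bShowMouseCursor = true;

	// Self Hit Test Invisible: the widget itself stops swallowing clicks,
	// while its children keep their own visibility settings.
	InterfaceWidget->SetVisibility(ESlateVisibility::SelfHitTestInvisible);
}
```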

Now open the WebInterface user widget and wrap the Browser sub-widget with a Retainer Box.

4.PNG

After debugging I’ve discovered that Hit Test Invisible is very buggy (not surprised) and doesn’t always work as intended. For instance, even if I set Hit Test Invisible on the entire WebInterface widget, the Retainer Box is still treated as Visible for some reason even though it’s a child widget. So if the Retainer Box is Visible it blocks mouse input from passing through, regardless of the parent widget’s visibility. The safest setup is therefore to make everything Self Hit Test Invisible and only toggle the visibility of the Browser variable:

5.PNG
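
That toggle is the only piece we'll flip at runtime. As a sketch (the class and function names here are placeholders, and Browser is assumed to be the bound browser sub-widget), it boils down to:

```cpp
// Everything else in the widget stays Self Hit Test Invisible; only the Browser
// switches between accepting mouse input and letting clicks fall through.
void UMyWebInterfaceWidget::SetBrowserInteractive(bool bInteractive)
{
	Browser->SetVisibility(bInteractive
		? ESlateVisibility::Visible
		: ESlateVisibility::HitTestInvisible);
}
```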

Also not surprising: the engine developers didn’t expose the render target of a URetainerBox in any way. There’s a GetEffectMaterial() function but no GetRenderTarget(). Even worse, they made the member variable private instead of protected, so you can’t even get at it from a subclass.

Therefore the only way to get the render target is to let the internal mechanics of the URetainerBox set the texture parameter on your material, and then use GetTextureParameterValue() to obtain the reference. Otherwise you’d have to use a UWidgetComponent, but that is more suited to having the widget in 3D world space, not 2D screen space. As a bonus, the URetainerBox uses the exact dimensions of the widget for the render target, so there are no extra settings to deal with in terms of dimensions.

Since we can’t reference the render target directly, make sure the parameter name on the URetainerBox, the texture parameter in the material, and the name you pass to GetTextureParameterValue() all match exactly:

Note that the texture parameter matches the widget, and the material’s domain is “User Interface” with a blend mode of “Translucent”. Again, this material exists for absolutely no reason other than to obtain a reference to the render target itself. Hopefully we don’t have to copy/paste too much C++ from the SRetainerWidget codebase to make this work natively in the WebUI plugin.
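
In C++ that lookup would be roughly the following — a sketch, assuming the shared parameter name is “Texture” (use whatever you actually set); K2_GetTextureParameterValue is the C++ counterpart of the GetTextureParameterValue blueprint node:

```cpp
#include "CoreMinimal.h"
#include "Components/RetainerBox.h"
#include "Engine/TextureRenderTarget2D.h"
#include "Materials/MaterialInstanceDynamic.h"

// Pull the render target back out of the retainer box's effect material.
// "Texture" must match both the Texture Parameter set on the Retainer Box
// and the texture parameter inside the material itself.
static UTextureRenderTarget2D* GetRetainerRenderTarget(URetainerBox* RetainerBox)
{
	UMaterialInstanceDynamic* EffectMID = RetainerBox ? RetainerBox->GetEffectMaterial() : nullptr;
	if (!EffectMID)
	{
		return nullptr;
	}

	// The retainer box has already pushed its render target into this parameter,
	// so reading the parameter back hands us the texture we were never given directly.
	UTexture* Texture = EffectMID->K2_GetTextureParameterValue(TEXT("Texture"));
	return Cast<UTextureRenderTarget2D>(Texture);
}
```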

At this point, if you set the Browser sub-widget in your WebInterface blueprint to Hit Test Invisible, you can actually click PLAY and test your “clickable interface” to see if it’s working. When you move the mouse around you should see hover events for the various static mesh actors that you replaced, and if you click on them you should see a click event as well.

Now our goal is to allow these clicks to pass through to the meshes behind the interface based on the opacity of the render target under the mouse cursor. It would be great if we could reroute the click events by overriding methods such as UpdateAndDispatchHitBoxClickEvents() in AHUD, but as usual the short-sighted engine developers did not make that function virtual. Therefore your recommendation of toggling the visibility on each tick should at least minimize any issues with built-in click events for the time being. The long-term goal, however, should still be to find a way to implement something natively in the engine that intercepts viewport hitboxes.

So all we really need to do for now is read the pixel at the current mouse location on each tick and toggle the visibility of the Browser sub-widget accordingly. Go back to your MyHUD blueprint and set up something similar to this in the OnTick() event (source is below for copy/pasting directly into blueprints):

https://pastebin.com/raw/ftxYCgQi

![12.PNG|761x238](upload://7a2feGvJPmh27fgnG4bIju79NCx.png)
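
For the eventual C++ version, the tick logic boils down to something like the sketch below. This is not the plugin implementation, just an approximation of the blueprint above: AMyHUD, UpdateBrowserHitTest, RenderTarget (from the GetRetainerRenderTarget() sketch earlier), and Browser are placeholder names, the exact pixel-read function may differ by engine version (the pastebin blueprint is the reference), and the coordinate mapping assumes the interface fills the viewport at 1:1 scale — a real version needs to account for DPI scaling and widget geometry:

```cpp
#include "GameFramework/HUD.h"
#include "Kismet/KismetRenderingLibrary.h"
#include "Engine/TextureRenderTarget2D.h"
#include "Components/Widget.h"

// Called from the HUD tick: sample the render target under the cursor and decide
// whether the browser should receive mouse input this frame.
void AMyHUD::UpdateBrowserHitTest(UTextureRenderTarget2D* RenderTarget, UWidget* Browser)
{
	APlayerController* PC = GetOwningPlayerController();
	float MouseX = 0.0f;
	float MouseY = 0.0f;
	if (!PC || !RenderTarget || !Browser || !PC->GetMousePosition(MouseX, MouseY))
	{
		return;
	}

	// Read the pixel under the cursor. This stalls on a GPU readback, which is
	// fine for a prototype but not something you'd want to ship as-is.
	const FColor Pixel = UKismetRenderingLibrary::ReadRenderTargetRawPixel(
		this, RenderTarget, FMath::TruncToInt(MouseX), FMath::TruncToInt(MouseY));

	// Above the 0.333 alpha threshold from the screenshots the web page keeps the
	// click; otherwise the browser goes Hit Test Invisible so the click falls
	// through to the world.
	const bool bOpaque = (Pixel.A / 255.0f) > 0.333f;
	Browser->SetVisibility(bOpaque
		? ESlateVisibility::Visible
		: ESlateVisibility::HitTestInvisible);
}
```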

Now click PLAY again and you should be able to interact with the meshes behind the interface, based on the opacity of each pixel and the 0.333 threshold from the screenshots. Keep in mind this is not an ideal solution: I’ve already noticed that the web page will occasionally keep buttons focused/highlighted when toggling to Hit Test Invisible, so we’ll probably want to manually trigger a MouseLeave event or add some kind of frame delay. This is why injecting into the hitbox detection of the viewport is the more appropriate solution, but for now this is at least a workable prototype.

I will draft up something more concrete in C++ for the WebUI plugin so this can be natively supported with minimal setup, and I’ll let you know if I have any further updates. Thanks again for the suggestion, and I hope this gives you a good start on a more dynamic interface!
