COMING SOON: ACCELERATED PAINT SUPPORT ON MAC AND LINUX
The WebUI plugin allows developers to create web-based user interfaces with HTML, JavaScript, and CSS using blueprints. It’s powered by a custom Chromium web browser on desktop and the built-in WebView browsers on mobile. The accelerated paint feature shares a texture between Chromium and the Unreal Engine, which provides excellent performance with minimal latency and no dropped frames!
This plugin also includes a robust JSON library that provides integrated management of objects, arrays, and primitive data types to seamlessly interop between JavaScript and blueprints. It comes with an example project that demonstrates a startup map with a volume slider and FPS counter. The plugin also supports multiplayer by stripping the browser from dedicated server builds without requiring changes to any blueprint classes.
You can even place these widgets physically in the 3D world with a virtual laser pointer! There’s full transparency support for both 2D and 3D. This is very useful for RTS-style games that require the mouse cursor to always be visible (because the user commonly clicks back and forth between the game and the interface). It’s also useful for VR games that implement 3D user interfaces and need them to be intuitive, since users have a realistic expectation of being able to click through the transparent parts of the browser.
Support for 3D widgets with passthrough transparency
The WebUI plugin is designed for JSON-based communication between UE5 and JavaScript. This not only provides the ability to update your interface from the engine using JSON but also for your interface to trigger events in-engine with optional JSON data.
Trigger UE5 events with optional JSON data from JavaScript
This is the core functionality, not found in the default web browser widget that comes with the engine, which allows the WebUI plugin to be used for in-game interfaces.
It’s also a lot cleaner and more reliable than using the object converter built into the engine, which can easily lead to all kinds of type mismatches and issues that are just not worth the hassle or the bugs.
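To make the shape of that traffic concrete, here is a minimal sketch of the kind of flat JSON payload an interface event might carry in either direction. This is purely illustrative: the plugin ships its own JSON library, and MakeJsonObject and the field names below are hypothetical, not part of its API.

```cpp
#include <map>
#include <sstream>
#include <string>

// Build a flat JSON object from string key/value pairs, e.g. an
// interface event payload like {"event":"SetVolume","value":"0.8"}.
// Real projects would use the plugin's JSON library; this only
// illustrates the shape of the data crossing the JS/blueprint boundary.
std::string MakeJsonObject(const std::map<std::string, std::string>& Fields)
{
    std::ostringstream Out;
    Out << '{';
    bool bFirst = true;
    for (const auto& Pair : Fields)
    {
        if (!bFirst)
            Out << ',';
        Out << '"' << Pair.first << "\":\"" << Pair.second << '"';
        bFirst = false;
    }
    Out << '}';
    return Out.str();
}
```

For example, MakeJsonObject({{"event","SetVolume"},{"value","0.8"}}) produces a payload the interface could hand to the engine, or vice versa.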
Great plugin! I know Epic has a backlogged bug where custom cursors cause duplicate pointers when using the web browser.
Any idea for a work around?
Sorry for the late reply I don’t check the forums very often. I haven’t tested custom cursors so I’m not aware of any cursor issues with the web browser. But I will definitely keep it in mind and let you know if I notice any problems or come across a solution.
Is this topic an appropriate place to ask questions related to this plugin?
I’ll ask anyway; if you consider it inappropriate, please tell me and I’ll remove this comment.
So my questions are:
I have WebUI set up as described in the documentation.
I have 4 separate levels and 4 different game modes, respectively.
The HUD set for each of these game modes is the WebUI HUD.
I share data between these levels using a GameInstance class.
The problem is that the UI is reloaded each time OpenLevel is called.
1.) How can I make the WebUI persistent across all levels, without using level streaming if possible?
2.) Is it possible to remove the loader at the beginning?
3.) Is it possible to use the Game and UI input mode and have event propagation from the UI only trigger on actual elements, not on the transparent canvas? That way, when I place some buttons at the bottom of a fullscreen WebUI, my player controller inputs would still work.
#include "PersistentUserWidget.h"

UPersistentUserWidget::UPersistentUserWidget( const FObjectInitializer& ObjectInitializer )
	: Super( ObjectInitializer )
{
	//
}

void UPersistentUserWidget::OnLevelRemovedFromWorld( ULevel* InLevel, UWorld* InWorld )
{
	// A null InLevel signals that the entire world is about to disappear. The engine's
	// base implementation would remove this widget from the viewport here, since it
	// could be holding onto dangerous actor references that won't carry over into the
	// next world. We intentionally skip that call so the widget persists across levels.
	if ( InLevel == nullptr && InWorld == GetWorld() )
	{
		//RemoveFromParent(); // intentionally disabled to keep the widget alive
	}
}
I’ll make a note to add this PersistentUserWidget to the 4.22 version of the plugin for blueprint-only projects. If you check out the comments in the code, be sure not to reference anything other than WebUI widgets in this blueprint or your game could crash.
Yes, you can hide the loader, but it is unfortunately built into the engine code. However, the example project already demonstrates how to solve this problem. When adding your widget to the viewport, set its initial visibility to hidden. Then set up a JavaScript ready event handler which notifies your blueprint that the page has fully loaded the DOM. This “ready” event can subsequently change the visibility of your widget, and you will not see the loader.
There is no way to redirect click events through the transparent part of the canvas directly in the engine. That would require JavaScript to do the DOM element detection and relay back to blueprints, generating a click event underneath the widget whenever the event bubbles all the way up to document.click() or $('body').click(). But that would most likely create a laggy experience for anything gameplay related.
However one way to work around this is to use multiple WebUI widgets that are docked to the sides of the screen or wherever you’d like. Then players can still click around these widgets since the transparent areas are now actually transparent in the viewport. So while a single fullscreen widget is best for most scenarios, sometimes breaking your interface up into smaller widgets can at least give you some access to things underneath the interface.
For the persistent UI, that is a great find, a million thanks. I will try it right now.
Thanks for the idea! But I recently found a simpler method: you just need to use ShowInitialThrobber(false) in the inputs of the widget.
It’s a pity that there isn’t a built-in solution for that. As I said above, I also didn’t see any other solution than passing the DOM element detection in and out and then setting the visibility of the widget. I will investigate this further, as I don’t like the solution of splitting the UI elements into separate widgets; I want an integral solution.
Yes, that is a good point about ShowInitialThrobber(false), but I believe the thinking behind leaving it enabled was that some developers might still want the simple loader to show that the interface is loading, or didn’t want users to think nothing was happening at all in case they ran into a loading issue.
However, if you’re looking to hide the loader and create a more seamless experience, I would highly recommend considering a handshake system. What I mean is: keep the WebUI widget initially hidden, and have your HTML/CSS/JS keep everything in the DOM hidden on load as well. Then, as previously mentioned, the interface notifies the engine in blueprints that the DOM is ready. Since all the HTML/CSS is still hidden at that point, when the engine makes the widget visible the first rendered frame is guaranteed to be transparent, and there’s no opportunity for the weird glitches we’ve noticed during debugging with many of our testers over the years (such as occasional white flickers or weird renders prior to page load).
Now you can have the engine relay back to the browser that the widget is visible and JS can trigger any of your initial fades or animations to present your interface. This provides a seamless load and some peace of mind that the engine isn’t going to do anything weird, but it is just my recommendation and you should of course do whatever works best for you.
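The handshake can be sketched as a small state machine. The state names and the Advance() helper below are illustrative only, not part of the plugin or the engine; they just make the ordering of the steps explicit.

```cpp
// States of the loader-free startup handshake described above:
// the widget starts hidden, the page reports the DOM is ready while its
// own content is still hidden in CSS, the engine shows the widget (first
// frame guaranteed transparent), then the page runs its intro animations.
enum class EUiState
{
    Hidden,     // widget added to viewport, visibility Hidden
    DomReady,   // JS "ready" event received, HTML/CSS still hidden
    Shown,      // engine made the widget visible
    Presented   // JS triggered the initial fades/animations
};

// Advance the handshake one step; Presented is terminal.
// Hypothetical helper for illustration, not a plugin API.
EUiState Advance(EUiState State)
{
    switch (State)
    {
    case EUiState::Hidden:   return EUiState::DomReady;
    case EUiState::DomReady: return EUiState::Shown;
    case EUiState::Shown:    return EUiState::Presented;
    default:                 return EUiState::Presented;
    }
}
```

The key design point is that each transition is triggered by a message from the other side (browser to engine, then engine to browser), so neither side ever renders before the other is ready.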
Also keep in mind that, while it is not ideal, you could use a combination of the methods I previously laid out for clicking underneath the interface. For instance, if you don’t want to sacrifice fullscreen animations and the like, then don’t: use a “hit test invisible” fullscreen WebUI widget behind a few smaller “visible” WebUI widgets that are actually clickable/focusable. Now you can have fullscreen anything but still isolate clickable areas to the form elements you overlay on top.
Again, while this is not ideal, because you’re basically keeping a non-clickable fullscreen widget in sync with a bunch of small clickable ones, it’s about the best I can come up with at this point until an in-process web interface such as WebKit is available that can execute JS events in engine ticks.
Hello again, sorry for bothering you. I thought of an idea: get the current texture referenced in the WebUI material and use its pixel data to check which pixels are completely transparent. If the pixel at the mouse’s current x, y is transparent, set the widget visibility to Hit Test Invisible; if it is not transparent, set the widget visibility to Visible. It would run this check on every tick. What do you think of this solution for handling “click through” on the transparent zones of the WebUI layer? Is it viable?
I unfortunately have not looked into this problem just yet. I understand custom cursors are common in certain games but it’s not a high priority at this time since you can just not use custom cursors (or for the time being hide the engine cursor and render your own widget/brush at the cursor location). I will definitely look into this issue whenever I get a chance to experiment with custom cursors, I just haven’t done so for any game at this point.
That is a really great idea! I was hyper-focused on JavaScript and whether DOM elements were actually clickable or not. But that would be an awesome way to approximate the interface using some kind of opacity threshold similar to how materials have 0.333 for opacity masks. As I noted in the FAQ you can use a URetainerBox to render a widget directly to a UTextureRenderTarget2D which can be used with the ReadRenderTargetPixel() function. Once we have that pixel data we can reroute the click events based on the opacity threshold or perhaps just toggle hit test visibility as you mentioned.
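Stripped of engine types, the per-pixel decision might look like the sketch below. FPixel, ShouldPassThrough, and the plain pixel buffer are stand-ins for UTextureRenderTarget2D and ReadRenderTargetPixel(), and the 0.333 default mirrors the opacity-mask threshold mentioned above; none of these names are plugin or engine APIs.

```cpp
#include <vector>

// One RGBA8 pixel, as read back from a render target.
struct FPixel { unsigned char R, G, B, A; };

// Decide whether the browser widget should let a click pass through at
// the given cursor position, based on the alpha of the rendered pixel
// and an opacity threshold. Illustrative stand-in for reading the
// URetainerBox render target under the mouse each tick.
bool ShouldPassThrough(const std::vector<FPixel>& Pixels,
                       int Width, int Height, int X, int Y,
                       float Threshold = 0.333f)
{
    if (X < 0 || Y < 0 || X >= Width || Y >= Height)
        return true; // cursor outside the widget: never block input
    const FPixel& P = Pixels[Y * Width + X];
    return (P.A / 255.0f) < Threshold; // transparent enough: click through
}
```

The blueprint prototype later in the thread applies exactly this decision by toggling the Browser sub-widget between Visible and Hit Test Invisible.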
I will look more into this and see if I can develop some kind of viable solution. Thanks for the suggestion!
You can still find the plugin (for versions 4.16-4.20) on the marketplace via your library tab in the Epic Games Launcher. However it has been deactivated so it doesn’t show up in a search anymore and you cannot add it to your library if you hadn’t already.
As mentioned the plugin is free and on GitHub, but you will need to link your Epic Games account to your GitHub account using the instructions at unrealengine.com/ue4-on-github since the repository is private. Otherwise you will see the following 404 error page:
This is due to the terms and conditions of the Unreal Engine EULA: “Any public Distribution (i.e., intended for Engine Licensees generally) which includes Engine Tools (including as modified by you under the License) must take place either through the Marketplace (e.g., for distributing a Product’s modding tool or editor to end users) or through a fork of Epic’s GitHub UnrealEngine Network (e.g., for distributing source code).”
Even though this specific plugin isn’t classified as “engine tools” we plan to distribute modding plugins in the future. Therefore if our plugins are no longer being distributed on the marketplace we will have the least restrictions by distributing within the private Epic Games network on GitHub.
Alright so after looking more into this here is a quick prototype that I put together using blueprints. I’ll obviously develop something more streamlined via C++ for the plugin itself but at least this allows us to experiment with the URetainerBox.
If you start with the example project these are some basic changes to get things up and running. First go into the MyController blueprint and enable the following options:
This provides a “clickable interface” and allows any actor to respond to the OnClicked() or OnBeginCursorOver() and OnEndCursorOver() events. Now create a temporary blueprint called MyActor and have it parented from StaticMeshActor instead of just the default Actor class. Then add the following nodes to the event graph for debugging:
Once this blueprint is created you can go into MyMap and replace any static mesh actors with your custom blueprint. Right-click on each mesh you’d like to debug and select “Replace Selected Actors with”:
Then open the InterfaceHUD blueprint and change the default input mode from UI Only to Game and UI instead:
Also open the MyHUD blueprint and change the default visibility from Visible to Self Hit Test Invisible:
Now open the WebInterface user widget and wrap the Browser sub-widget with a Retainer Box.
After debugging I’ve discovered that Hit Test Invisible is very buggy (not surprised) and doesn’t always work as intended. For instance even if I have Hit Test Invisible set on the entire WebInterface widget the Retainer Box is still treated as Visible for some reason even though it’s a child widget. So the Retainer Box unfortunately blocks mouse input from passing through if Visible regardless of the parent widget. Therefore the safest setup is to make everything Self Hit Test Invisible and only toggle the visibility of the Browser variable:
Also not surprising is that the engine developers didn’t expose the render target of a URetainerBox in any way. There’s a function for GetEffectMaterial() but no function for GetRenderTarget(). Even worse, they made the local variable completely private instead of protected, which makes extending it that much harder.
Therefore the only way to get the render target is to have the internal mechanics of the URetainerBox set the texture parameter of your material, and then use GetTextureParameterValue() to obtain the reference. Otherwise you’d have to use a UWidgetComponent, but that is more suited to having the widget in 3D world space, not 2D screen space. Moreover, the URetainerBox uses the exact dimensions of the widget for the render target, so there are no extra settings to deal with in terms of dimensions.
Since we can’t directly reference the render target, make sure you use the exact same parameter name for the URetainerBox as you do in your call to GetTextureParameterValue() along with the texture parameter in the material:
Note that the texture parameter matches the widget and the material type is “User Interface” along with a blend mode of “Translucent”. Again we are doing this for absolutely no reason other than to obtain a reference to the render target itself. Hopefully we don’t have to copy/paste too much C++ from the SRetainerWidget codebase to make this work natively in the WebUI plugin.
At this point if you set the Browser sub-widget in your WebInterface blueprint to Hit Test Invisible you can actually click PLAY and test your “clickable interface” to see if it’s working. When you move the mouse around you should see hover events for the various static mesh actors that you replaced. If you click on them you should see a click event as well.
Now our goal is to allow these clicks to pass through behind the interface based on the opacity of the render target under the mouse cursor. It would be great if we could reroute the click events by overriding methods such as UpdateAndDispatchHitBoxClickEvents() in AHUD, but unfortunately the engine developers did not make that function virtual. Therefore your recommendation of toggling the visibility on each tick should at least minimize any issues with built-in click events for the time being. However, the long-term goal should still be to find a way to implement something natively in the engine that will intercept viewport hitboxes.
So all we really need to do for now is read the pixel at the current mouse location on each tick and toggle the visibility of the Browser sub-widget accordingly. Go back to your MyHUD blueprint and setup something similar to this in the OnTick() event (source is below for copy/pasting directly into blueprints):
Now click PLAY again and you should be able to interact with the meshes behind the interface based on the opacity of each pixel along with that 0.333 threshold in the screenshots. Keep in mind this is not an ideal solution as I’ve already noticed that the web page will occasionally keep buttons focused/highlighted when toggling to Hit Test Invisible. Therefore we’ll probably want to manually trigger a MouseLeave event or create some kind of frame delay. This is why actually injecting into the hitbox detection of the viewport is a more appropriate solution, but for now this is at least a workable prototype.
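One way to sketch that frame delay outside the engine: only commit a visibility change after the same per-tick decision has held for a few consecutive ticks, so a single-frame flip can’t strand a button in its hovered state. FVisibilityDebouncer is a hypothetical helper for illustration, not plugin or engine code.

```cpp
// Debounce the per-tick clickability decision: a change of state is only
// applied after it has been requested for RequiredTicks consecutive
// ticks, smoothing out single-frame flips between Visible and
// Hit Test Invisible. Illustrative logic only.
class FVisibilityDebouncer
{
public:
    explicit FVisibilityDebouncer(int InRequiredTicks)
        : RequiredTicks(InRequiredTicks) {}

    // Feed the raw per-tick decision (true = widget should be clickable);
    // returns the debounced state actually applied to the widget.
    bool Tick(bool bWantClickable)
    {
        if (bWantClickable == bApplied)
        {
            PendingTicks = 0;          // no change requested; reset counter
        }
        else if (++PendingTicks >= RequiredTicks)
        {
            bApplied = bWantClickable; // change held long enough: commit it
            PendingTicks = 0;
        }
        return bApplied;
    }

private:
    int RequiredTicks;
    int PendingTicks = 0;
    bool bApplied = true; // start clickable (widget Visible)
};
```

A manual MouseLeave event would still be the cleaner fix for the stuck-hover symptom; the debounce just narrows the window in which it can occur.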
I will draft up something more concrete in C++ for the WebUI plugin so this can be natively supported with minimal setup. I’ll let you know if I have any further updates on this. Thanks again for the suggestion, and hope this gives you a good start on having a more dynamic interface!
Hey, I read through your implementation and was quite impressed. You did a great job. In the meantime I also experimented with this, but only in C++. I’ve managed to get some results; I’ll show you later, or if you have a Discord I can message you there.
Some notes (not sure about all of them):
I didn’t see any use of the materials located in the WebBrowserContent folder. I tried pinning/unpinning the nodes from the texture to the material but it had no effect, so I assume they are not used at all. I also found no references to these materials in the actual code.
I did a version without the need to bind to any mouse events, just checking on tick for now (however, it would be better if I could find a way to bind to a delegate inside WebInterface for OnMouseMove).
To pass the click or move event down to the game layer, I see only 2 solutions: as you already mentioned, somehow propagate them down, or just toggle Visible/Hit Test Invisible on the widget itself. For now I just toggle the visibility.
This way we can avoid having to deal with creating a base class for all meshes.
I will keep you posted.
Repeating myself again: I REALLY LOVE your plugin. It’s just amazing. I can donate up to $30 if you’ll allow me to. Thank you very much for it.
Hello again, so I recorded what I managed to do. But it’s still rough work, not very configurable, etc.
I am not even sure if I could do it without messing around with the WebBrowserWidget itself. I am thinking of making a separate engine fork for my project. But I am still thinking…
Hello!
We are working on a project where we use the WebUI plugin to develop the user interface with React. We are currently facing some problems, and it would be nice to be able to debug our app. It runs perfectly in Chromium, but when loaded in Unreal with WebUI it’s not working as intended.
As far as I understand, WebUI uses the Chromium Embedded Framework built into UE4, which should have a debugging tool that can be opened on localhost at some port.
Is it possible with WebUI to open the DevTools? And if so, how do we know which port it’s using?