Here's the tutorial I said I would do a quick write-up on during the stream, about making widgets appear over actors in the world (but in screen space).
@NickDarnell Thanks for the stream, it's good to see these things happening on a semi-regular basis. You mentioned on the stream that you'd elaborate on some of the longer questions in the thread (including mine) afterwards, so I was wondering if you were still planning to do that.
The binding is on the widget itself because it's easy to understand. Where it goes from there is up to you. Several people have chosen to define the logic inside a UUserWidget subclass in C++, then simply bind the logic to themselves after reparenting the widget; that keeps the logic in C++ and away from the UI designer. Others have chosen to pass a Model object into the widget that they sample in their bindings. Both of those routes require a C++ developer because Blueprints do not permit constructing arbitrary UObjects yet, but that's likely to change soon.
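The Model-object route above can be sketched in plain C++. This is an illustrative sketch only, not actual UMG API; `PlayerModel`, `HealthWidgetSketch`, and `HealthTextBinding` are hypothetical names standing in for a model object and a widget property binding that gets polled each frame:

```cpp
#include <functional>
#include <string>

// Hypothetical model object passed into the widget.
struct PlayerModel
{
    int Health = 100;
};

// Hypothetical stand-in for a widget with a bound property. In UMG, a
// bound property is evaluated every frame; here Tick() plays that role.
struct HealthWidgetSketch
{
    // The binding samples the model; the UI designer never sees the logic.
    std::function<std::string()> HealthTextBinding;

    std::string Tick() const
    {
        // Poll the binding, exactly as UMG polls bound properties per frame.
        return HealthTextBinding ? HealthTextBinding() : std::string();
    }
};
```

A C++ developer wires the binding to the model once, and the widget always shows the latest data:

```cpp
PlayerModel Model;
HealthWidgetSketch Widget;
Widget.HealthTextBinding = [&Model]
{
    return "HP: " + std::to_string(Model.Health);
};
```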
Future versions will probably allow bindings to be specified on a member object's members, removing the extra step you mentioned.
Evaluating only when data changes is a difficult thing to determine automatically. There's the KnockoutJS approach, which simulates running every binding to record which values it accesses, building up the tree of values that could change and invalidate the UI. While that method is super slick, we don't have any system in UE4 for doing anything like that, and it's also not how Slate works. Slate exposes a TAttribute for generally any value on a widget, letting you either bind it to a function that massages the data for the UI (so it always provides the latest data) or simply pass it a literal value so no delegate is ever called. The whole UE4 editor is written with that approach.
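The TAttribute idea can be sketched in a few lines of standalone C++. This is a minimal illustrative re-implementation of the concept, not the real Slate `TAttribute` type: an attribute either holds a literal value, or holds a getter delegate that is polled every time the value is read.

```cpp
#include <functional>

// Minimal sketch of the TAttribute concept (illustrative, not Slate code).
template <typename T>
class TAttributeSketch
{
public:
    // Bind to a literal: no delegate is ever invoked on Get().
    TAttributeSketch(const T& InValue) : Value(InValue) {}

    // Bind to a function that massages the data each time it is read,
    // so the UI always sees the latest value.
    TAttributeSketch(std::function<T()> InGetter) : Getter(std::move(InGetter)) {}

    T Get() const { return Getter ? Getter() : Value; }

private:
    T Value{};
    std::function<T()> Getter;
};
```

Binding to a lambda gives a live value that is re-evaluated on every read, while passing a literal avoids the delegate call entirely; that trade-off is the same one Slate makes.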
If you'd like bindings not to run every frame, that's up to you: use the Set______ functions instead to assign the value when you've detected it has changed. We may come up with something slicker in the future that allows more customization of how the bindings work. If a value is expensive to evaluate, you could also build the caching into the View Model that you're binding to the UI.
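The set-on-change pattern with a cache in the View Model can be sketched as follows. This is a hedged illustration in plain C++, not UMG API; `FakeTextWidget`, `ScoreViewModel`, and their members are hypothetical names:

```cpp
#include <string>

// Hypothetical stand-in for a text widget with a Set-style function.
struct FakeTextWidget
{
    std::string Text;
    int SetCalls = 0;
    void SetText(const std::string& InText) { Text = InText; ++SetCalls; }
};

// A tiny View Model that caches the last value and only pushes an update
// to the widget when the underlying data has actually changed, instead of
// re-evaluating a binding every frame.
class ScoreViewModel
{
public:
    explicit ScoreViewModel(FakeTextWidget& InWidget) : Widget(InWidget) {}

    void SetScore(int NewScore)
    {
        if (NewScore == CachedScore)
        {
            return; // Nothing changed; skip the (potentially expensive) update.
        }
        CachedScore = NewScore;
        Widget.SetText("Score: " + std::to_string(NewScore));
    }

private:
    FakeTextWidget& Widget;
    int CachedScore = -1; // Sentinel meaning "never set".
};
```

Setting the same score twice in a row results in only one widget update, which is the whole point of moving from per-frame bindings to explicit Set calls.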
Time. The plan is to build a modular system; in the meantime we'll take steps to make things more reusable, like the named slot feature I mentioned in the stream.
Maybe, but not any time soon.
Just caching the result only works well for static UIs. I think it will take a lot of work to find a nice balance of caching vs. live rendering vs. lerping between generated mips during animations. I also think you'd encounter a lot of problems finding a good way to not cache too much: with large full-screen backgrounds on UIs, you wouldn't want to cache the entire surface area; you'd actually want to N-Slice the sections of the vector graphics so that you don't have to cache large regions of just a flat color.
Some of it’s philosophy differences, but a lot of it is time and resource constraints.
I'm usually working with C++ and it's not an issue, but constructing UObjects from Blueprints would be very useful. Can we have that ASAP?
I just finished watching the archived stream, and there is one briefly mentioned subject I'd like to expand on. Matt mentioned that one eventual performance improvement would be automatically atlasing textures. What about premade atlas textures? Most of our HUD elements are already atlased and delimited through FCanvasIcons defined in our AHUD actor, but I couldn't find a way to recreate the equivalent in UMG using either an asset type or a widget.