[Twitch] Support Stream - Unreal Motion Graphics (UMG) UI - Oct. 21, 2014

I’m curious about a few of the design decisions for UMG:
Why is binding on the widget class itself? It would be nice to be able to offer an alternate data source, in a more MVVM-style pattern. It would also be nice to limit the actual evaluation of the data to only when it changes (or an event says to refresh its local value), since the evaluation might be expensive. You can certainly approximate this by adding a gate and resetting it when changes occur, but the main reason I want this is to reduce the amount of logic a UI artist has to implement themselves, and to keep as much logic as possible out of the widget BP (anything that doesn't directly control what the widget is doing). Especially since these are binary assets, a UI artist and an engineer can't effectively work on the same widget at the same time right now (not without a lot of pain and a lot of boilerplate hookup: create a variable that is the data source, then create a function for every data item that simply gets the data).
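To make the pattern concrete, here is a minimal sketch in plain C++ (not the UMG/Slate API; all names are hypothetical) of the event-driven binding I mean: the widget side caches the last value and only re-evaluates when the data source raises a change notification, instead of polling a bound getter every frame.

```cpp
#include <cassert>
#include <functional>
#include <vector>

// Hypothetical view-model: owns the data and notifies listeners on change,
// so bound widgets never poll or re-evaluate per tick.
class HealthViewModel {
public:
    void Subscribe(std::function<void(float)> Listener) {
        Listeners.push_back(std::move(Listener));
    }
    void SetHealth(float NewHealth) {
        if (NewHealth == Health) return;           // no-op writes don't trigger a refresh
        Health = NewHealth;
        for (auto& L : Listeners) L(Health);
    }
    float GetHealth() const { return Health; }
private:
    float Health = 1.0f;
    std::vector<std::function<void(float)>> Listeners;
};

// Hypothetical widget-side binding: caches the last pushed value instead of
// running a (possibly expensive) getter every frame.
class HealthBarBinding {
public:
    explicit HealthBarBinding(HealthViewModel& VM) {
        Cached = VM.GetHealth();
        VM.Subscribe([this](float NewValue) { Cached = NewValue; ++Refreshes; });
    }
    float CachedValue() const { return Cached; }
    int RefreshCount() const { return Refreshes; }
private:
    float Cached = 0.0f;
    int Refreshes = 0;
};
```

The point is that the UI artist's widget only ever reads the cached value, and all the data-fetching logic lives in the view-model asset an engineer can own separately.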

I’m also a fan of defining resources (dictionaries of values) that can be bound to directly, allowing easier style switching. Styles right now seem specific to a widget hierarchy, which makes them not nearly as useful. Why aren’t the styles more modular? Why can’t you bind elements to shared parameters?
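What I mean by shared resources, sketched in plain C++ (hypothetical names, not an existing UMG feature): widgets resolve their appearance through named parameters in a shared dictionary, so swapping the dictionary restyles every bound element at once.

```cpp
#include <cassert>
#include <map>
#include <string>

// Hypothetical shared style dictionary: a flat map of named parameters
// that any widget can bind to, instead of per-hierarchy style values.
using StyleResources = std::map<std::string, float>;

StyleResources MakeDefaultTheme() {
    return {{"Button.Padding", 8.0f}, {"Text.Size", 14.0f}};
}
StyleResources MakeCompactTheme() {
    return {{"Button.Padding", 4.0f}, {"Text.Size", 11.0f}};
}

// A widget resolves its appearance through the active dictionary,
// falling back to a default when the parameter is missing.
float Resolve(const StyleResources& Theme, const std::string& Key, float Fallback) {
    auto It = Theme.find(Key);
    return It != Theme.end() ? It->second : Fallback;
}
```

Switching from the default theme to the compact one changes every bound widget without touching any widget hierarchy.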

Less important, but still useful, is a Visual State Manager, as seen in WPF. Being able to define states on a control (in a loose fashion), override those states or add new ones on existing controls, and have the system auto-generate transition animations is a great productivity booster. You already use similar state machines in other areas of the engine; would it be possible to reuse them for this?
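The auto-generated transitions are the key part. A tiny sketch in plain C++ (hypothetical, not WPF or UE API): each state just names target property values, and the transition between two states falls out of interpolating every shared property rather than hand-animating it.

```cpp
#include <cassert>
#include <map>
#include <string>

using PropertySet = std::map<std::string, float>;

// Hypothetical visual state: a name plus the property values the control
// should have while in that state, e.g. {"Opacity", 0.5f} for Disabled.
struct VisualState {
    std::string Name;
    PropertySet Targets;
};

// Auto-generated transition: linearly interpolate every property the target
// state defines. Alpha = 0 yields From's values, Alpha = 1 yields To's.
PropertySet Transition(const VisualState& From, const VisualState& To, float Alpha) {
    PropertySet Result = From.Targets;
    for (const auto& [Key, Target] : To.Targets) {
        float Start = Result.count(Key) ? Result[Key] : Target;
        Result[Key] = Start + (Target - Start) * Alpha;
    }
    return Result;
}
```

A designer only authors the end states; the in-between frames cost nothing to produce.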

I’d also like to see vector support come in at some point. Although you can make items that resize using things like 9-slices, they don’t work across changing resolutions (the regions end up scaled down). Basically, every resolution you support needs its own asset, increasing the package size. It would be nice to have proper resolution-independent UI using vectors: if rasterizing them is considered a performance problem, why not just cache the results? The cache takes up the same memory the texture itself would need, and you can apply it to static sub-hierarchies (reducing total draw time overall). Selective retention is a great way to balance performance and visual quality.
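The caching I have in mind is just memoization keyed on (asset, resolution). A plain C++ sketch (hypothetical names, not an engine API): rasterize once per key, reuse the cached bitmap every frame after, and invalidate only when the vector sub-hierarchy actually changes.

```cpp
#include <cassert>
#include <iterator>
#include <map>
#include <utility>

struct Bitmap { int Width = 0, Height = 0; };   // stand-in for real pixel data

// Hypothetical retained-raster cache: pay the vector rasterization cost once
// per (asset, resolution), then serve the cached bitmap on every later draw.
class RasterCache {
public:
    const Bitmap& Get(int AssetId, int Width, int Height) {
        auto Key = std::make_pair(AssetId, std::make_pair(Width, Height));
        auto It = Cache.find(Key);
        if (It == Cache.end()) {
            ++Rasterizations;                     // expensive path, taken once per key
            It = Cache.emplace(Key, Bitmap{Width, Height}).first;
        }
        return It->second;                        // cheap cached path on reuse
    }
    void Invalidate(int AssetId) {                // e.g. the sub-hierarchy changed
        for (auto It = Cache.begin(); It != Cache.end();)
            It = (It->first.first == AssetId) ? Cache.erase(It) : std::next(It);
    }
    int RasterizationCount() const { return Rasterizations; }
private:
    std::map<std::pair<int, std::pair<int, int>>, Bitmap> Cache;
    int Rasterizations = 0;
};
```

Memory cost matches what a pre-baked texture at that resolution would take anyway, but you get crisp output at any resolution and only re-rasterize on change.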

As you can probably tell, a lot of this is basically WPF features (especially when paired with Blend). I know there are areas of WPF that don’t really make sense in a game application, but there are still a lot of great features that could be implemented. I know you guys have WPF experience over there, so I assume you must have considered at least some of these features at some point, and I’m curious why you opted against them. As an engineer, I’m probably not going to spend a lot of time in UMG myself, but I still feel that where it is right now doesn’t yet suggest a good workflow for UI artists, designers, and UI engineers working together.