Can we draw freehand shapes?
Can we import AI vector files?
Can we import layered PS files?
How do fonts work in UMG?
Regards
As per the tutorial that I linked, each Button is its own Widget, so in that case you can just reference each “Button” by the Widget class (in that case, ListItem). That’s how I’m storing the Button references.
If you don’t have an entire Widget per Button like that, you can store references to the actual Button type using a UObject.
Indeed, I found a bit later that I could just store the widget references in an array and then obtain references to the child buttons. Thanks for your answer!
Question.
How do you use Drag & Drop in UMG?
Any chance for SVG support?
It would be cool if you could provide SVG assets for icons, buttons, etc at compile time and then at runtime UE would rasterize the SVGs based on the current user resolution.
How would you go about creating a main menu option for selecting levels within a game using UMG?
Button > On Clicked > Remove From Viewport > Show Mouse Off > Open Level
I’m curious about a few of the design decisions for UMG:
Why is binding on the widget class itself? It would be nice to be able to offer an alternate data source, in a more MVVM-style pattern. Furthermore, it would be nice to limit the actual evaluation of the data to only when it changes (or when an event says to refresh its local value), as the evaluation might be expensive. You can certainly approximate this by adding a gate and resetting it when changes occur, but the main reason I want this is to reduce the amount of logic a UI artist has to implement themselves, as well as to keep as much logic as possible out of the widget BP (logic that doesn’t directly control what the widget is doing). Especially since these are binary assets, a UI artist and an engineer can’t currently work on the same widget at the same time (not without a lot of pain, and a lot of boilerplate hookup: create a variable that is the data source, and create a function for every data item that simply gets the data).
I’m also a fan of defining resources (dictionaries of values) that can be bound to directly, allowing easier style switching. Styles right now seem specific to a widget hierarchy, which makes them not nearly as useful. Why aren’t the styles more modular? Why can’t you bind elements to shared parameters?
Less important, but still useful, is the Visual State Manager, as seen in WPF. Being able to define states on a control (in a loose fashion), override those states or add new ones on existing controls, and have the system auto-generate transition animations is a great productivity booster. You utilize similar state machines in other areas of the engine; would it be possible to utilize that for this?
I’d also like to see vector support come in at some point. Although you can make items that resize using things like N-slices, they don’t work for changing resolutions (the regions end up scaled down). Basically, every resolution you support needs its own asset, increasing the package size. It would be nice to have proper resolution-independent UI using vectors: if rasterizing them is considered a performance problem, why not just cache the results? That takes up the same memory the texture itself would need, and you can apply the cache to static sub-hierarchies (reducing total draw time overall). Selective retention is a great way to balance performance and visual quality.
As you can probably tell, a lot of this stuff is basically WPF features (especially when paired with Blend). I know there are areas of WPF that don’t really make sense in a game-application, but there are still a lot of great features that could be implemented. I know you guys have WPF experience over there, so I assume you must have at least considered all of these features at some point. As such, I am curious as to why you opted against them. As an engineer, I’m probably not going to spend a lot of time in UMG myself, but I still feel where it is at right now doesn’t suggest a good workflow yet for UI artists, designers, and UI engineers together.
I’d like to see the following example: say you have an inventory screen with a uniform grid panel, you fill that panel with UserWidgets, and you end up with a 5x5 grid or similar. Then handle the case where the user right-clicks one of those UserWidgets in game. What is the best way to get the mouse coords to be able to show a context menu widget at those coords? I think this would be a good example because you could cover a lot of stuff: drag and drop, screen coords, panel slots, etc…
I’m in Australia, so I am unable to attend the stream unless I wanna be up and about at 5am, but I look forward to watching it later.
I won’t be able to catch the stream live, unfortunately, but I do have a couple of questions:
Cheers!
-Jan
Could you explain what was meant by the following quote from the UE 4.5 Release notes?
Is this related to the following card (Trello) in the roadmap?
As far as I understand, there is no support for vectors/SVGs so I don’t quite get how the button graphic is resolution-independent.
Since the text comes from fonts, which contain vector glyphs, I could see how that could be resolution-independent. However, you could also just be scaling down the text textures there too; it depends on how it is implemented behind the scenes.
Edit: Found this nugget from Michael Noland, text does seem to be resolution-independent.
For those users (gabrielefx, RPotter and anyone else) interested in vector support, please post a reply to this post Unreal Engine Vector Graphics Support so we can show Epic how many people are interested in this.
I have a lot of User Widgets in my project, what about folders for User Widgets?
Very excited about this stream! I have a few questions:
You’ll be able to categorize them in 4.6.
My questions about UMG:
Thank you for answering my question about SVG support, Nick.
I agree with everything you said about tessellation-based vector graphic support. Supporting things like thin lines with a geometry-based solution would be very hard without high levels of MSAA, which isn’t even possible with the current deferred renderer.
From a performance standpoint, I think the best solution would be adding vector graphic support via CPU rasterisation.
Certain image assets could be stored as SVG, then based on the user’s current display resolution, the SVG would be parsed/rasterized on the CPU to the target resolution at run time using the open source, BSD-licensed AGG library (http://www.antigrain.com/) and uploaded to the GPU as a texture.
Was not able to watch the stream live, but I’d just like to throw a thank you to and Matt Kuhlenschmidt
for answering the questions in a great and detailed way.
You guys are great.
Cheers!
Where can we see the recorded stream?
Thanks zeustiak.