Any word on when the sample project for this will come out?
Also, what are the minimum header files you need to include in the PCH to make a custom editor for a class, with access to the viewport to place actors, etc.? I’m having trouble working out all the dependencies to make this work without access to the sample project.
They’re testing out the changes for C++ projects on the learn tab this week, and I’ll be getting the description/banner image/etc… stuff over to them tomorrow, so hopefully soon.
UnrealEd is sort of the data model owner, and LevelEditor is the module for working with the level editor viewports/etc… It depends on exactly what you’re trying to do, but a lot of the time you can get away with just depending on UnrealEd (plus the common ones like PropertyEditor or AssetRegistry if you need to deal with them). GEditor is the core bit, letting you get access to the selection state, etc. It’s declared as a UEditorEngine*, though it’s actually a UUnrealEdEngine; you shouldn’t rely on that in general.
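As a rough sketch, a game editor module’s Build.cs might end up looking something like this (module names are placeholders; trim the dependency list down to what you actually use):

```
using UnrealBuildTool;

public class MyGameEditor : ModuleRules
{
    public MyGameEditor(TargetInfo Target)
    {
        PublicDependencyModuleNames.AddRange(new string[] { "Core", "CoreUObject", "Engine", "InputCore", "MyGame" });

        PrivateDependencyModuleNames.AddRange(new string[] {
            "Slate", "SlateCore",
            "UnrealEd",       // data model / GEditor
            "LevelEditor",    // level editor viewports, menus, etc.
            "PropertyEditor", // details panel customizations
            "AssetRegistry"
        });
    }
}
```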
It’s a bit complicated, but put simply it’s similar to a custom navigation system. I want a data structure for a level map that is simply a collection of nodes and paths, where a path is just a pointer to another node (basically a bidirectional graph with metadata on each vertex). Nodes will also link to other actors in the level. I want an easy way to visualise and edit this in the editor, but at runtime I’ll only need the data.
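Roughly the kind of thing I have in mind (very rough sketch, all names made up; I’m thinking of node indices rather than raw pointers so it serializes cleanly):

```
#include "GameFramework/Actor.h"
#include "MapGraphActor.generated.h"

USTRUCT()
struct FMapNode
{
    GENERATED_USTRUCT_BODY()

    // Position of the node in the level
    UPROPERTY()
    FVector Position;

    // Indices of other nodes this node connects to (bidirectional graph)
    UPROPERTY()
    TArray<int32> LinkedNodeIndices;

    // Optional link to another actor in the level
    UPROPERTY()
    TWeakObjectPtr<AActor> LinkedActor;
};

UCLASS()
class AMapGraphActor : public AActor
{
    GENERATED_BODY()

public:
    // The whole graph lives on one actor
    UPROPERTY()
    TArray<FMapNode> Nodes;
};
```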
My idea is to wrap the data structure in a custom editor that visualises the nodes and paths and allows me to do things like move nodes in the viewport and add/remove/merge/split nodes. I realise that this could probably be done simply by exposing the properties of the nodes and editing them in the details panel, but that would make the level design process much more error prone, plus this gives me a good chance to learn how to extend the editor. I’m hoping to expose the functions needed to the details panel of the actor so it can be easily customised there.
You might consider making a custom editor mode (FEdMode), which can spawn a toolbox of slate widgets in the top left panel, and process additional input / render additional things in the editor viewport.
This is how mesh paint, landscape, and BSP editing work, for example. Have a look at mesh painting or the tile map ed mode as a starting point (landscape is actually closer to your workflow RE: spawning additional actors, but it’s complex enough that it may not be a good place to start learning).
Note: If you go down this path, wherever possible, use methods directly on FEdMode or via GetModeManager (for selection/etc…) rather than going thru GEditor directly. In the distant future we’d like to break reliance on GEditor / a global selection state, and allow editor modes to be used inside of other editors such as the BP editor, whenever it makes sense. The tile map editor is taking the first steps in that direction and it can be used either directly in the level editor or in the standalone tile map asset editor.
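For reference, hooking a mode up from an editor module looks roughly like this (rough sketch; the mode class/ID names are placeholders, and the exact RegisterMode signature has shifted a bit between versions):

```
// The mode class (placeholder names); override Render(), HandleClick(), InputDelta(), etc. as needed
class FMyGraphEdMode : public FEdMode
{
public:
    const static FEditorModeID EM_MyGraphEdModeId;
};
const FEditorModeID FMyGraphEdMode::EM_MyGraphEdModeId = TEXT("EM_MyGraphEdMode");

// In your editor module's StartupModule():
FEditorModeRegistry::Get().RegisterMode<FMyGraphEdMode>(
    FMyGraphEdMode::EM_MyGraphEdModeId,
    NSLOCTEXT("MyGraphEditor", "MyGraphEdModeName", "Graph Editing"),
    FSlateIcon(),
    /*bVisible=*/ true);

// ...and in ShutdownModule():
FEditorModeRegistry::Get().UnregisterMode(FMyGraphEdMode::EM_MyGraphEdModeId);
```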
@
I am doing something very similar to this, and have been using a custom editor mode as suggested. If you’d like, you can PM me and I’d be happy to share the code I have. It’s still very much a work in progress, but it would be a decent reference or starting point.
@Michael
If you can offer any pointers on my approach, it would be really appreciated. I initially had each element in my graph (nodes, connections, etc.) as a separate actor, but scrapped that approach for a number of reasons. I now have a single graph actor with a self-contained data structure, and use hit proxies to detect interaction with and selection of specific elements. Since the data can’t easily be separated into simple properties, this leads to some obstacles. I’m currently using UObject-derived proxies to represent the currently selected elements, and exposing the necessary values through a details customization and AddExternalProperty. This seems to be working, although it’s a bit of a hassle to keep things synced, especially with undo/redo. Does it sound too convoluted? Maybe I could compromise and store the entities as UObjects, but then I’d need to somehow convert the graph actor to a more compact and efficient data structure in-game.
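For context, the hit proxy side of it looks roughly like this (heavily simplified, names are mine, and the HandleClick signature may vary slightly by engine version):

```
// A hit proxy carrying the index of the graph node it was drawn for
struct HGraphNodeProxy : public HHitProxy
{
    DECLARE_HIT_PROXY();

    int32 NodeIndex;

    HGraphNodeProxy(int32 InNodeIndex)
        : HHitProxy(HPP_UI)
        , NodeIndex(InNodeIndex)
    {
    }
};
IMPLEMENT_HIT_PROXY(HGraphNodeProxy, HHitProxy);

// In the ed mode's Render(): tag each node's drawing with its proxy
PDI->SetHitProxy(new HGraphNodeProxy(NodeIndex));
PDI->DrawPoint(Node.Position, FLinearColor::White, 10.0f, SDPG_Foreground);
PDI->SetHitProxy(nullptr);

// In the ed mode's HandleClick(): see which node (if any) was hit
if (HitProxy != nullptr && HitProxy->IsA(HGraphNodeProxy::StaticGetType()))
{
    const int32 ClickedNode = static_cast<HGraphNodeProxy*>(HitProxy)->NodeIndex;
    // ...update the selection proxy objects / details customization from here
}
```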
Also, is there an easy way to hook into the snapping functionality from my editor mode, to disable it or customize it to snap to values other than a uniform grid? And is it tied inextricably to the currently selected actor’s location, or could I make it work with my setup (manipulating sub-elements of an actor, positioning the widget manually)?
Oh man, thanks. I went back and looked around (since I’m using 4.6 right now) and found that the Material Editor uses both a graph and a viewport without the whole apps thing. Thanks man!
Registering a custom component visualiser through the manager seems to get rid of all the other visualisers. I think this is due to the way the component visualiser module registers them through GUnrealEd, which is still null when my plugin loads. Any tips?
That may be a bug that needs to be fixed; most other registration systems like that use a separate singleton that is initialized on first use to reduce loading order issues. I’m not sure if there are any existing examples of extending it in another module yet.
One thing you can try is to make your plugin module load in a later phase. You can set the phase per-module in the .uplugin file; for example:
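```
"Modules": [
    {
        "Name": "MyVisualizerPlugin",
        "Type": "Editor",
        "LoadingPhase": "PostEngineInit"
    }
]
```

(The module name above is just a placeholder; PostEngineInit is one of the later loading phases.)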
Yeah, the code for the component visualizer module did seem pretty different to the others. All it does is call GEditor->RegisterComponentVisualizer(). I ended up just making my plugin load post engine init and calling the GEditor function directly instead of going through the module. I realise this probably isn’t the correct way to do it, but it works for now until the module is fixed.
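Roughly what I have now (simplified, class names are placeholders; I’m going through GUnrealEd since that’s where the register function actually lives):

```
void FMyPluginEditorModule::StartupModule()
{
    // GUnrealEd is valid by now because the module loads in the PostEngineInit phase
    if (GUnrealEd != nullptr)
    {
        TSharedPtr<FComponentVisualizer> Visualizer = MakeShareable(new FMyComponentVisualizer());
        GUnrealEd->RegisterComponentVisualizer(UMyComponent::StaticClass()->GetFName(), Visualizer);
        Visualizer->OnRegister();
    }
}

void FMyPluginEditorModule::ShutdownModule()
{
    if (GUnrealEd != nullptr)
    {
        GUnrealEd->UnregisterComponentVisualizer(UMyComponent::StaticClass()->GetFName());
    }
}
```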
I’m looking to create a custom game-specific editor for conversations. Something very similar to the Behavior Tree editor would be ideal for this: a “vertical” graph with only a few specific node types so the dialogue writers don’t have to deal with the complexity of full Blueprint, and the ability to annotate nodes with little conditions.
I’ve already watched the entire stream and specifically wrote down your answer to one of the questions near the end concerning custom graph types, where you mentioned some of the things that would be required (base subclass of UEdGraphNode, GraphSchema, etc.). However… even knowing this, looking at the Behavior Tree Editor code as an example, it’s a pretty massive amount of code to just dive straight into.
How would you recommend dealing with this? I was just thinking of starting with the code in BehaviorTreeEditorModule (which basically seems to be the “entry point” into the entire system), copy/pasting pieces of code, and changing “BehaviorTree” to “Conversation” everywhere, but I’m not sure how feasible this approach is. Would you be able to recommend something better, or would this be the best way to go?
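From the stream, I gather the bare minimum is roughly a node class and a schema along these lines (just a rough skeleton from poking around; names are mine and the exact override signatures may differ between versions):

```
// ConversationGraphNode.h (rough skeleton)
#include "EdGraph/EdGraphNode.h"
#include "ConversationGraphNode.generated.h"

UCLASS()
class UConversationGraphNode : public UEdGraphNode
{
    GENERATED_BODY()

public:
    // Create the input/output pins this node type exposes
    virtual void AllocateDefaultPins() override;

    // Text shown on the node in the graph panel
    virtual FText GetNodeTitle(ENodeTitleType::Type TitleType) const override;
};

// ConversationGraphSchema.h (rough skeleton)
#include "EdGraph/EdGraphSchema.h"
#include "ConversationGraphSchema.generated.h"

UCLASS()
class UConversationGraphSchema : public UEdGraphSchema
{
    GENERATED_BODY()

public:
    // Populate the right-click "add node" menu with the few node types writers need
    virtual void GetGraphContextActions(FGraphContextMenuBuilder& ContextMenuBuilder) const override;

    // Decide which pin-to-pin connections are legal
    virtual const FPinConnectionResponse CanCreateConnection(const UEdGraphPin* A, const UEdGraphPin* B) const override;
};
```

On top of that there’s presumably the asset class itself, an asset editor toolkit hosting an SGraphEditor, and optionally custom SGraphNode widgets, which I gather is where most of the BehaviorTreeEditor bulk comes from.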
Hi, thanks for the awesome tutorial. Can we please have the source for your demo project, as it would make learning much easier? Could you upload it to GitHub until the learn tab issue is resolved?
Is it possible to create a new CategoryType for the “Create Advanced Asset” menu from plugins? This menu seems to be more complicated than the others, with the EAssetTypeCategories enum and the FAdvancedAssetCategory struct used to create a submenu based on the category.
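For context, I’m currently creating my category roughly like this via IAssetTools (simplified, names are placeholders, and I’m not sure it’s the intended route):

```
#include "AssetToolsModule.h"
#include "IAssetTools.h"

// In the plugin's editor module StartupModule():
IAssetTools& AssetTools = FModuleManager::LoadModuleChecked<FAssetToolsModule>("AssetTools").Get();

// Registers a new top-level category in the "Create Advanced Asset" menu
EAssetTypeCategories::Type MyAssetCategory = AssetTools.RegisterAdvancedAssetCategory(
    FName(TEXT("MyCategory")),
    NSLOCTEXT("MyPlugin", "MyCategoryName", "My Category"));

// The category is then returned from the asset type actions' GetCategories()
// so the custom asset type shows up under that heading.
```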
Having to scroll down in a menu to access your category really doesn’t feel natural. Do you plan to tweak that so the menu expands up to a certain number of custom categories and only then adds the scrollbar if needed?
Also, the documentation and asset icons used in that menu by Paper2D are still in the Engine folder; does that mean this won’t be supported via plugins in the 4.8 release?
Asset icons can be added via plugins now (see FClassIconFinder::RegisterIconSource/UnregisterIconSource); I filed an issue for that and went ahead and added the icons to the main editor style as a stop-gap, but haven’t moved them into the plugin yet.
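Off the top of my head, registering an icon source looks roughly like this (paths and names are placeholders and the brush setup is simplified):

```
// In the plugin's editor module StartupModule()
// (StyleSet is a TSharedPtr<FSlateStyleSet> member of the module):
StyleSet = MakeShareable(new FSlateStyleSet(TEXT("MyPluginStyle")));
StyleSet->SetContentRoot(FPaths::GamePluginsDir() + TEXT("MyPlugin/Resources"));

// Class icons/thumbnails are keyed by "ClassIcon.<ClassName>" / "ClassThumbnail.<ClassName>"
StyleSet->Set("ClassIcon.MyNodeActor", new FSlateImageBrush(StyleSet->RootToContentDir(TEXT("Icons/MyNodeActor_16x.png")), FVector2D(16.0f, 16.0f)));
StyleSet->Set("ClassThumbnail.MyNodeActor", new FSlateImageBrush(StyleSet->RootToContentDir(TEXT("Icons/MyNodeActor_64x.png")), FVector2D(64.0f, 64.0f)));

FSlateStyleRegistry::RegisterSlateStyle(*StyleSet);
FClassIconFinder::RegisterIconSource(StyleSet.Get());

// ...and on shutdown:
FClassIconFinder::UnregisterIconSource(StyleSet.Get());
FSlateStyleRegistry::UnRegisterSlateStyle(*StyleSet);
```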
One-line tooltips can be in the plugin, but rich tooltip documentation is still currently limited to the Engine/Documentation/Source/Shared folder. I’ve mentioned that before but can’t find a JIRA on it, so I’ll file one. I think plugins can also define an overall docs URL that might show up in the plugin wizard, but I’m not 100% sure.
RE: the scroll bar showing up, I’m not seeing that; are you adding a bunch of options or just one? I could see there being a max height before it throws in the scroll bar, and in general the look/feel of the filter menu isn’t ideal.
It does that after creating a single custom category; see the screenshot below. I’ll try removing Paper2D once I’m at home to see if the scrollbar disappears.
Thanks for the FClassIconFinder::RegisterIconSource/UnregisterIconSource and documentation tips
Best Regards
Edit: I tested the scrollbar on a few computers and it doesn’t seem to appear on every one, even when the screen resolution is the same. I’ll try to figure out why it shows up on some and not on others.
I was wondering if there is any documentation on the relationship between all of the different editor classes. At the moment I’m making a new asset type and I want it to be editable in both its own editor and the level viewport. It seems from studying some of the code base that this can be done through FEditorModeTools? I’m starting to form an idea of how it all fits together, but there are so many similarly named classes (FEditorModeTools and FEditorModeToolkit?). I think it’s something like this:
FEditorViewportClient and FEdMode can share an FEditorModeTools instance that can be used to edit an object? Meaning that if some action isn’t processed by the viewport client itself, it can be delegated to the mode tools.
If there was some sort of diagram showing the class relationship it would be really helpful!
Can you explain it to me?
I took the example style file from Paper2D, set ClassIcon and ClassThumbnail, then called FClassIconFinder::RegisterIconSource(StyleSet.Get());
But in the editor I get white squares instead of icons. The icons are in the right place, and the path is right (I checked it in the debugger).