Unreal Editor in VR - Official megathread

Very cool image! I was imagining something similar early on, but now I wonder whether VR needs a more “everyday” programming metaphor: something like walking up to your microwave and punching the start button, thereby defining an event, an object, and an action through direct interaction with objects in the world, instead of trying to represent those things with abstract nodes floating in VR space.

Perhaps, if you wanted a cube to rotate when you walk up to it, you would first establish the triggering event by creating a collision volume in VR space and then walking into it to trigger it. Second, you would define an action by reaching out and rotating the floating cube with your hand around a particular axis. Finally, you would cause all objects with events or actions attached to glow a bright color depicting their function. Then you simply draw a connection between event objects and action objects and test the functionality by hitting a play button. Behind the scenes in Blueprint space, UE4 would auto-generate Blueprint graphs to represent everything you just constructed in VR space. This could be the best of both worlds, allowing you to rough out game logic in VR and then refine it further with Blueprints.
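To make that concrete, here’s a minimal C++ sketch (hypothetical class and property names; nothing the engine auto-generates for you today) of the kind of logic such a graph would boil down to: an overlap event on a trigger volume that starts the cube rotating.

```cpp
// RotatingTriggerCube.h -- hypothetical actor: "walk into the volume, the cube rotates"
#pragma once

#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "RotatingTriggerCube.generated.h"

UCLASS()
class ARotatingTriggerCube : public AActor
{
    GENERATED_BODY()

public:
    ARotatingTriggerCube();
    virtual void Tick(float DeltaSeconds) override;

protected:
    // Bound to the trigger volume's BeginOverlap -- the "event" half of the idea
    UFUNCTION()
    void OnTriggerBegin(UPrimitiveComponent* OverlappedComp, AActor* OtherActor,
                        UPrimitiveComponent* OtherComp, int32 OtherBodyIndex,
                        bool bFromSweep, const FHitResult& SweepResult);

    UPROPERTY(VisibleAnywhere) class UStaticMeshComponent* CubeMesh;
    UPROPERTY(VisibleAnywhere) class UBoxComponent* TriggerVolume;
    UPROPERTY(EditAnywhere) float RotationSpeed = 90.f; // degrees per second
    bool bRotating = false;
};

// RotatingTriggerCube.cpp
#include "RotatingTriggerCube.h"
#include "Components/BoxComponent.h"
#include "Components/StaticMeshComponent.h"

ARotatingTriggerCube::ARotatingTriggerCube()
{
    PrimaryActorTick.bCanEverTick = true;

    CubeMesh = CreateDefaultSubobject<UStaticMeshComponent>(TEXT("CubeMesh"));
    RootComponent = CubeMesh;

    // The collision volume that acts as the triggering event
    TriggerVolume = CreateDefaultSubobject<UBoxComponent>(TEXT("TriggerVolume"));
    TriggerVolume->SetupAttachment(RootComponent);
    TriggerVolume->SetBoxExtent(FVector(200.f));
    TriggerVolume->OnComponentBeginOverlap.AddDynamic(this, &ARotatingTriggerCube::OnTriggerBegin);
}

void ARotatingTriggerCube::OnTriggerBegin(UPrimitiveComponent*, AActor*, UPrimitiveComponent*,
                                          int32, bool, const FHitResult&)
{
    bRotating = true; // event fired: something walked into the volume
}

void ARotatingTriggerCube::Tick(float DeltaSeconds)
{
    Super::Tick(DeltaSeconds);
    if (bRotating)
    {
        // The "action" half: spin around the axis you demonstrated by hand (yaw here)
        CubeMesh->AddLocalRotation(FRotator(0.f, RotationSpeed * DeltaSeconds, 0.f));
    }
}
```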

It’s funny: most of the people who have expressed negative sentiment about editing VR content in VR feel just the opposite! They complain that waving their arms around building a level would be much too tiresome! hahaha I’m with you, let’s get off our fat arses and be more creative and health conscious too! :slight_smile:

Creating extremely user-friendly world-editing tools for VR is a very interesting and important project.

I would say that is not what Epic is doing with our VR editing features. Some things will become more approachable, but our goal is to expose the existing Unreal Editor to VR, not to create a whole separate set of editing tools.

I’d encourage you to continue with your project as you’ve described it, as that is a whole different angle on approaching creativity in VR. You’re right that there might be convergence, but I think it will only happen in the very long term.

Well said! It’s great that we’ll be able to share ideas – this is the age of exploration of user interfaces for VR, AR and motion controls, so everyone will have so much to learn.

I’m sure there is a way to retain accuracy while gesturing with the same hand, for most people. Probably the folks at LEAP have experimented with this more than we have.

I really like this idea. We did some tests a few months ago with allowing you to “just grab” objects that were very close to your hands. It felt pretty good, but most people who tried it didn’t ever get close enough to objects to notice the feature existed. I think this is partly due to the learning curve of navigating around in VR. Until you are very comfortable with the controls, you just sort of want to stand still and interact with objects that are really far away rather than bringing them close enough to touch. I think we’ll bring this feature back, though, after thinking about the design a little more.

I wonder if this idea might be more useful when selecting items from the UI with direct touch since they actually exist in your reachable sphere all the time and also when you shrink the world scale down so it feels more like you are moving chess pieces on a game board. Anyway, very cool stuff. Can’t wait to try what you’ve built already! :slight_smile:

What technique are you using to render HUD/UI elements and attach them to an actor or component (like a motion controller or hand) in VR while still getting good results?

I tried UMG widgets attached to an actor, and the performance was horrible when moving around; the UMG was blurry whenever it moved at all, or it blinked in and out. And it was just a simple test menu.

Thanks

We are using normal UE4 features like MotionControllerComponent and WidgetComponent. Some things are physically attached, like the lasers to the hand meshes. These need to be 1:1 with RL and have very low latency. Other things that we’re drawing (such as the UI panels) are currently “tracking” the hand’s location manually. So unfortunately, they can lag behind a frame or so and don’t benefit from UE4’s late frame update. We’ll be improving this as we go.

As for widget components, we’ve made a number of improvements to them while working on this project, most of which are being merged back regularly to the main branch and are available on GitHub. For example, we solved an issue with motion blur and antialiasing when using masked materials. We’ve also added more configuration options, and several critical input improvements have been made. In fact, we just tracked down a serious issue yesterday that could cause clicks on buttons to not register consistently!
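For anyone trying to reproduce the basic hand-attached UI setup described above, here is a rough sketch (a hypothetical pawn, not our actual VR Editor code; it assumes the UMG and HeadMountedDisplay modules are in your Build.cs) of a WidgetComponent parented to a MotionControllerComponent so that a UMG panel follows the hand:

```cpp
// HandMenuPawn.h -- hypothetical example of a hand-attached UMG panel
#pragma once

#include "CoreMinimal.h"
#include "GameFramework/Pawn.h"
#include "Components/SceneComponent.h"
#include "Components/WidgetComponent.h"
#include "MotionControllerComponent.h"
#include "HandMenuPawn.generated.h"

UCLASS()
class AHandMenuPawn : public APawn
{
    GENERATED_BODY()

public:
    AHandMenuPawn()
    {
        RootComponent = CreateDefaultSubobject<USceneComponent>(TEXT("VROrigin"));

        // Tracked controller: anything attached to this follows the hand pose
        LeftController = CreateDefaultSubobject<UMotionControllerComponent>(TEXT("LeftController"));
        LeftController->SetupAttachment(RootComponent);
        LeftController->Hand = EControllerHand::Left; // 4.11-era property; newer versions use MotionSource

        // World-space UMG panel physically parented to the controller
        HandMenu = CreateDefaultSubobject<UWidgetComponent>(TEXT("HandMenu"));
        HandMenu->SetupAttachment(LeftController);
        HandMenu->SetDrawSize(FVector2D(400.f, 300.f));
        HandMenu->SetRelativeLocation(FVector(10.f, 0.f, 5.f));
        // Assign the widget class (your UMG menu) in a Blueprint subclass or via SetWidgetClass()
    }

    UPROPERTY(VisibleAnywhere)
    UMotionControllerComponent* LeftController;

    UPROPERTY(VisibleAnywhere)
    UWidgetComponent* HandMenu;
};
```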

This is really awesome! I feel like I’ve found a way to create my dream.
I want to know: will it support two people entering the same scene together to edit it?
If we can work together to build a scene in the virtual world, it will be very exciting!

It’s great to see this stuff being worked on, but we could really use a basic VR template before the Rift and Vive are released in a month’s time. Something simple that’s been optimized for VR, including motion controls, would go a long way toward helping out those interested while the full editor mode is being worked on.

So is there any chance you guys can put something simple together for 4.11?

I’ve always wondered if there was a way to make programming easier for the average person by using a more visual, intuitive, cause-and-effect style of creation: seeing what each block does and how it interacts with other blocks in the chain, and visualising the information flow and what happens at every point in the circuit in real time or frame by frame.

Anyone can understand how something like a car works by watching videos that simplify everything until you get down to the basic physics involved.

Hello, I just wanted to let you know I bought the HTC Vive JUST for this! :smiley:
For me, building worlds while being inside them is already amusing, a dream come true!
I can’t wait for March 16 to see more news about the Unreal VR Editor!

I’m really impressed by how Unity is thinking about the future of content creation in VR, including:

  1. Zero Interfaces - more direct manipulation of the VR world rather than symbolic manipulation with buttons, sliders, and such.

  2. Smart Responsive Assets - assets that self-configure or adapt themselves to the user’s intent as expressed with voice and gestures.

  3. Intelligent Assistance - machine-learning systems that flatten learning curves for users by augmenting their creative process with AI skills.

Very good article and podcast here:

http://www.roadtovr.com/future-unity-content-creation-tools-smart-assets/

I’m extremely interested in these ideas and starting to work on methods of expressing one’s objectives and intentions to a computer without having to use a formal programming/scripting language such as C++ or Blueprints. Can we communicate our ideas in rough approximations to a machine without having to provide the precision of existing tools? If the machine can discern our intentions and augment our skills then much simpler tools and less cluttered interfaces would suffice.

I hope a “Mixed Reality” plugin gets developed for Unreal Engine so developers can make compelling videos that properly represent their VR creations. I love watching these kinds of videos; they’re absolutely entertaining and important for marketing:

I’ve built a working green screen keyer in UE4’s material system for a recent video production/virtual set project, so I know that at least one part of the equation is feasible. :wink:

(Attached image: GreenTest.png)
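In case it’s useful, the core keying math is surprisingly small. Here’s a simplified sketch of a green-difference key (not my actual material, and the struct/function names are just for illustration); the same few lines translate almost directly into a material Custom node, with the camera texture sample as input and the result driving opacity.

```cpp
// Hypothetical green-difference key -- illustrative only.
#include <algorithm>

struct FPixelRGB { float R, G, B; }; // linear color sample from the camera texture

// Returns 0 for pixels that are clearly key green and 1 for clear foreground,
// with a soft ramp in between for edge blending.
float ChromaKeyAlpha(const FPixelRGB& Pixel, float Threshold = 0.05f, float Softness = 0.15f)
{
    // How much greener is this pixel than its strongest other channel?
    const float GreenExcess = Pixel.G - std::max(Pixel.R, Pixel.B);

    // Remap [Threshold, Threshold + Softness] down to [1, 0] for a soft matte edge.
    const float Alpha = 1.0f - (GreenExcess - Threshold) / Softness;
    return std::clamp(Alpha, 0.0f, 1.0f);
}
```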

Wouldn’t it be cool to track a real physical mouse and keyboard 1:1 with a virtual representation of a mouse and keyboard in VR using Lighthouse, so that a user could really be productive with traditional 2D interfaces in VR, such as complex parts of the editor like Blueprints? Why do everything with touch controllers? You could simply choose the best input device for any particular task. Tracking a physical keyboard means you could see it, pick it up, and move it around, yet know precisely where to peck the keys. With a flat-looking representation of a Blueprint graph hovering in front of you, you could reach for the mouse and have precision control over the nodes and their connections.

Hello everyone, I hope you all watched the GDC livestream. I made a survey for the VR Editor to get a better grasp of who is interested in it, which hardware you want to use, and what you would like to see in the future.

You can find the survey here

Thanks in advance for taking the survey and have a nice day!

+1. Answering the survey questions will help tremendously for those who are planning, or even just thinking about, using the VR Editor. Help shape the product! :slight_smile:

Also, I added a link in the original post to the preview branch of the VR Editor for those who want to look at the source or want to dive in and check it out (if you have the right hardware).

Hi, great presentation at GDC! I’m very excited after seeing all the new features. The things I like the most:

  1. The virtual workspace: being able to position the editor windows around the scene. (Not sure how the Material or Blueprint editors can be used in VR, except for reference.)

  2. Love the colored laser pointers that I assume change color depending on which buttons are being pressed. Brilliant!

Can’t wait to test it out myself! Thanks for all the hard work and putting it in our hands early! :slight_smile:

(Just completed the survey)

Hey there @DarkVeil !

I just got the VR Editor built from the GitHub source; I’ve got a DK2 and a Hydra.
I included getnamo’s plugin in my project; however, seeing as it’s for engine version 4.10, it’s a no-go.
I love the transition into the level, though!

P.S.: @getnamo, you would be a hero if you updated it to build for the VR Editor! <3

It would need some work to get the Hydras working. For example, the input needs to be mapped for the VR Editor, but that is not really possible for us without engine support.

So close yet so far.
I guess it’s gonna be a long wait if I don’t have a Vive </3