Unreal Editor in VR - Official megathread

O.O

Thank you so much JasonW! Awesome post.

We've been pondering these features already, and I think UE4's architecture will make it easy to experiment with most of them. The animation and "sketch up" (world building) features especially resonate, but I'm a bit surprised about the voice commands.

People are a bit shy about speaking voice commands in VR. They are already blind to their surroundings and having to act everything out physically. This is something the hardware will fix eventually with pass-through cameras and such, but voice is still a bit dicey. Everything else is already on our "as soon as we can do it" list.

Thank you so much for the feedback, and I appreciate the work you put into the pictures!

Hi! Thanks so much for your nice comments. I'm sure you guys have already dreamed up most of the cool ideas; all I'm trying to do is demonstrate some ideas I had to the community. Illustrating ideas is way more fun than just writing about them. I'm just glad you guys are inviting us to participate with you as you explore VR's potential.

As far as voice commands go, I understand your point. However, I think people mostly feel timid about speaking voice commands when they have to remember what to say. I'm suggesting that whatever you look at, point at, or see in the floating UI menus could simply be spoken; the context would give people more confidence to use their voice. But perhaps people are more worried about who might hear them when they can't see others around them, like you suggested. It's just a shame not to use one of our best communication assets, our voice.

I've been advocating the use of voice both in the editor (for VR) and in general. As far as remembering commands goes, that would be easy to solve for the user: just make the command "shortcut" customizable, akin to VoiceAttack :smiley:

Creative tools in VR are a frontier where no one has the answers yet, so we are counting on the community and our peers in the industry to help inspire us and evolve the direction over time. I don't think there is any other way it can be successful!

This is a good idea. I’ll definitely look into it.

Just decided to repost my thoughts here, felt I was taking over the thread too much! There’s not enough interesting discussion yet! :frowning:

VR Workspaces
Would like the ability to place the editor windows in user-definable places, for example:

1. On the controller (like it is now).
2. Docked as a HUD at a desired position and size in the user's field of view.
3. Docked at desired locations and sizes in the world, so the user could create their own virtual workspace.

These three modes all seem really interesting; why use only one approach? Also, create a very intuitive way to grab the editor screens off the controller and stick them where you want, in the HUD or someplace in the world.

VR Measuring Tools
How about some extra tools such as a measuring tape, protractor, 3D guides, and construction planes for measuring out real-world scale and layout? I love how SketchUp works: if you want a doorway 8 ft high and 3.5 ft wide, it's super easy to measure it out with the guides first, then build on top of them. Snapping to construction guides is awesome. Most non-CAD 3D software doesn't have good measuring or layout tools. I want to work more like a construction engineer in VR.

VR Voice Command
Voice commands for whatever you are looking at or pointing at would be a good addition to direct manipulation with touch controllers. "Scale 150%", "New light", "Add material", "Scale world 50%", "Add tree mesh"... Seems like this would reduce the amount of pointing, calling up the browser, and the difficulty of inputting precise values.
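
To illustrate the idea, here is a purely hypothetical sketch of how a recognized phrase might be applied to whatever the pointer is currently on (nothing like this exists in the editor, and the phrase parsing is intentionally naive):

```cpp
// Hypothetical sketch of a context-based voice command: the recognized phrase
// is applied to the actor currently under the pointer. Not an engine feature;
// the phrase handling below is deliberately minimal.
#include "GameFramework/Actor.h"

void ApplyVoiceCommand(AActor* PointedAtActor, const FString& Phrase)
{
    if (!PointedAtActor)
    {
        return;
    }

    // e.g. "scale 150%" -> multiply the actor's current scale by 1.5
    if (Phrase.StartsWith(TEXT("scale ")))
    {
        FString Percent = Phrase.RightChop(6); // drop the "scale " prefix
        Percent.RemoveFromEnd(TEXT("%"));
        const float Factor = FCString::Atof(*Percent) / 100.f;
        if (Factor > 0.f)
        {
            PointedAtActor->SetActorScale3D(PointedAtActor->GetActorScale3D() * Factor);
        }
    }
    // ... other phrases ("new light", "add material", ...) would dispatch to
    // the corresponding editor actions in the same context-sensitive way.
}
```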

VR Animation
How about taking possession of a character in VR so you can use your own body to pose it or teach it how to move or perform a complex action? It would of course require additional full-body motion capture, but I just like the idea of doing it in VR. When you jump inside a body you are animating in first person; leave the body and you can refine the animation from a third-person point of view. I can imagine onion skinning in VR, and also manipulating a 3D spline for path-based animation. There are so many interesting things to explore with VR animation.

VR Cinematics
Directing Matinee/Sequencer cutscenes in VR would be super cool. You first block out your shots, stage props and actors, set up the lighting, and perform your own camera work and acting. Then you perform virtual editing and camera cuts. You could loop your sequence and keep adding layer on top of layer. You might manipulate time in similar ways to world scaling, with gestures.

VR Game Logic
I really want the ability to walk around my scene and set up all the game logic in VR. For example, I could add a collision volume in a doorway, and when I walk into it, a floating event alert would appear. I would then connect this to a Blueprint handler that I add to the scene. When I touch the 3D Blueprint hovering in space, it would expand into a representation of a Blueprint graph with 3D plumbing and layout. There is, as yet, no programming metaphor for VR space that I'm aware of, but it seems like Blueprints should work. (I think Epic should sponsor a contest to design and illustrate the best and most exciting way to bring Blueprints into VR space.)

Destroy Mode
One thing I would love: the 'laser pointers' often look like lightsabers from Star Wars as you swing them around. Could they be put into a destroy mode, where they become shorter like a sword so you can slash anything you want to delete from the scene? The developers behind Fantastic Contraption implemented a fantastic 'balloon popping with a sharp pin' mechanic to delete items from the world. Stuff like swinging a lightsaber to destroy unwanted content just makes the experience more fun and enjoyable.

Painting Meshes
What about planting meshes like trees or rocks everywhere you point and click to add them? Perhaps this gets into world-building tools a bit, but it would feel better than having to alt-drag copies into existence when you have forests to plant.

Multi-Select
What if the tip of your laser pointer could spread out into a cone shape with an adjustable radius, so you could select multiple objects at a time and then manipulate them as a group? The laser pointer becomes more of a flashlight with variable falloff. Maybe the green sphere at the tip becomes a hoop shape surrounding the desired objects. Only objects completely contained in the cone would be selected, which would help prevent unwanted meshes from being selected. You could also center your selection on the furthest object to be selected, to help define a distance range for the selection.
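
A rough sketch of the containment test I'm picturing (purely illustrative, not an engine feature; the function and parameter names are made up):

```cpp
// Illustrative sketch of "flashlight" multi-select: pick only actors whose
// bounds fit entirely inside a cone cast from the controller. Not an engine
// feature; names and thresholds are invented for this example.
#include "EngineUtils.h"
#include "GameFramework/Actor.h"

static bool IsFullyInsideCone(const FVector& ConeOrigin, const FVector& ConeDir,
                              float ConeHalfAngleRad, float MaxRange,
                              const FVector& Center, float Radius)
{
    const FVector ToCenter = Center - ConeOrigin;
    const float Dist = ToCenter.Size();
    if (Dist < KINDA_SMALL_NUMBER || Dist + Radius > MaxRange)
    {
        return false;
    }

    // Angle between the cone axis and the direction to the actor's center,
    // widened by the half-angle its bounding sphere subtends, so the whole
    // sphere (not just its center) must fit inside the cone.
    const float CenterAngle = FMath::Acos(FVector::DotProduct(ToCenter / Dist, ConeDir));
    const float SphereHalfAngle = FMath::Asin(FMath::Clamp(Radius / Dist, 0.f, 1.f));
    return CenterAngle + SphereHalfAngle <= ConeHalfAngleRad;
}

TArray<AActor*> SelectActorsInCone(UWorld* World, const FVector& Origin,
                                   const FVector& Dir, float HalfAngleRad, float Range)
{
    TArray<AActor*> Selected;
    for (TActorIterator<AActor> It(World); It; ++It)
    {
        FVector BoundsOrigin, BoundsExtent;
        It->GetActorBounds(/*bOnlyCollidingComponents=*/false, BoundsOrigin, BoundsExtent);
        if (IsFullyInsideCone(Origin, Dir, HalfAngleRad, Range, BoundsOrigin, BoundsExtent.Size()))
        {
            Selected.Add(*It);
        }
    }
    return Selected;
}
```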

Navigational Bookmarks
Would like an easy way to 'bookmark' the user's location, vantage point, and world scale to presets, floating markers, or voice commands, so you can jump back and forth between several vantage points while working and don't spend unnecessary time navigating. I found this super useful in 3D software like SketchUp; it saves so much time and is also good for creating guided presentations for others who are evaluating your work. There would of course be animated transitions from one vantage point to the next to make it feel natural, like teleporting I suppose.
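
The data a bookmark would need to capture is pretty small; a hypothetical sketch (not an existing editor feature, all names made up):

```cpp
// Hypothetical navigation bookmark: capture the viewer's transform and the
// current world scale so a vantage point can be restored later. Purely a
// sketch; nothing like this exists in the editor today.
#include "CoreMinimal.h"

struct FVRBookmark
{
    FVector  Location;      // viewer location in world space
    FRotator Orientation;   // viewer orientation
    float    WorldScale;    // world scale at the time of capture
    FString  Label;         // spoken or displayed name, e.g. "kitchen doorway"
};

class FVRBookmarkList
{
public:
    void Capture(const FVector& Loc, const FRotator& Rot, float Scale, const FString& Label)
    {
        Bookmarks.Add({Loc, Rot, Scale, Label});
    }

    // Look up a bookmark by its label (e.g. from a spoken command or a floating marker).
    const FVRBookmark* Find(const FString& Label) const
    {
        return Bookmarks.FindByPredicate(
            [&Label](const FVRBookmark& B) { return B.Label == Label; });
    }

private:
    TArray<FVRBookmark> Bookmarks;
};
```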

Building Composite Objects
The ability to construct composite objects, using grouping, parenting, or physical constraints such as pins or pivots.

Physical Sim
There's probably a lot that could be done with physical simulation. Designing in VR without some kind of physics or constraints might actually impair good design decisions, as well as make the environment feel too magical and not grounded.

Assistive AI
In the future, I would like to see some of the new generation of AI (deep learning) used to assist the user in manipulating the world in logical ways. For example, computers currently don't recognize when an object is oriented in an illogical or impossible configuration, such as a tree standing upside down or growing perpendicular out of a wall. The AI could assist the user in correcting these problems. AI might one day lower the learning curve for artists by intelligently assisting them in selecting and using tools, balancing compositions, or taking over and performing repetitive tasks. This kind of assistive technology is in heavy development today, and VR seems like a perfect place to use it.

That’s so cool!!

Rather than flattening and constraining nodes to a cube, Blueprints could stay 2D surfaces that we can pan, zoom, and move freely in any direction, with multiple Blueprints visible at once, each on its own 3D layer that you can move about.

So while working on one function you can see its sub-functions behind it. Why not link them too, so you can see where your functions are called from?

Native VR Development

After reading a number of negative comments left on the "Build for VR in VR" blog post, I began to wonder whether creating value for game/level designers is the way to go. Perhaps all they want is to streamline their workflows with as little energy, time, money, and as few bugs as possible. Maybe they don't want to wave their arms around in VR space for hours and hours. I wonder if editing in VR would be more attractive to less skilled wanna-be content artists, in the same way Blueprint authoring was meant to invite non-programmer artists to develop their own game logic.

It seems Unity believes this and has been working on its own plans for a standalone native VR development environment, one that doesn't necessarily strive to create value for traditional game developers. I think this is a good discussion topic if anyone wants to pipe in. You can read more in the following article:

UpLoadVR Article: http://uploadvr.com/unity-native-vr-announcement-exclusive/

Yes – this is an important topic. We talk about this a lot here.

There are certainly some approachability wins to using VR to learn how to lay out levels and build worlds (compared to traditional CAD-style controls), but that is very much an auxiliary benefit to us. We aren't doing this to make it easier for newcomers; we want to help VR developers build games more efficiently, while also exploring new methods of interacting with game worlds.

I don't think anyone really expects developers to suit up for hours at a time while editing content on current VR hardware. That is more of a long-term possibility, and this is very much a forward-looking project. The idea is to get in here on the ground floor and discover what works with the community. However, we can already see with our own VR developers that being able to interact with the editor's world while in VR (rather than just previewing the game) is really quite useful, even when limited to inspecting and tweaking for minutes at a time.

Creating an entirely separate editing environment to allow users to tweak levels in VR might not be a good long-term offering. Everything gets implemented twice: once for desktop users, then again (presumably stripped down and streamlined) for VR editing. We don't really want two entire separate user interfaces, and I don't think we can effectively maintain two without compromising one or the other. Instantly switching back and forth between the regular editor and UE4's editor in VR is really nice, too!

There will of course be some VR-specific features we’ll need, but overall I’m hoping that exposing existing editor functionality to VR gives developers the most flexibility. Plus I’m very much expecting improvements to the regular editor user interface as a side effect of our VR work. The UI will become cleaner to make it easier to interact with in VR, new in-viewport gizmos will be made available to regular editing modes, and we’ve already been improving features like WidgetComponent that will benefit even non-VR developers. It’s clear that a singular product is at least worth a shot.

Hi, thanks for your thoughtful post. I'm really excited about where UE4 is heading. I don't think I was really suggesting a stand-alone platform for VR developers. I'm curious, however, why Unity thinks that's the right way to go. Perhaps they think they can explore a more appropriate UI for VR than bringing legacy 2D into the virtual world. In the article they discussed using a modular "card" and "table top" based UI in VR as opposed to traditional UI controls. I suppose they want to focus on a made-for-VR authoring environment that uses intuitive "real-world" metaphors.

This kind of got me thinking about scripting the editor with Blutilities, custom UMG widgets, and live interaction scripts for VR. Something like this might open up VR's potential to the community and create a "VR Authoring Tools" category in the marketplace, so that everyone could get in on the "ground floor" and find new and interesting ways of deploying the editor's abilities in VR, without requiring hard-core programming effort on everyone's part. Maybe you have something like this in mind already: a modular, highly configurable system that's not hardwired so deep in the engine. This would allow folks less interested in game design to leverage UE4's architecture for their own creative purposes, like Oculus Story Studio is doing.

Interview with Sweeney about VR and VR Editor

Wow! Great article! It's nice that he explained more about how the VR editor might evolve over time and shared his vision of the growing importance of VR and the engine. It really explains a lot! Thanks for sharing the link! I'm even more excited about the future and Epic's plans to be at the center of it! :slight_smile:

Any information on how the HUD was done? I get poor results with UMG and attaching a 3D widget to an actor. Thanks.

We’re using regular widget components, with only a couple of modifications. (We added support for presenting native Slate widgets, as well as exposed some features to C++ to allow for fading and other things.) The way input works is obviously very different, because we’re not routing regular mouse events. I’m not sure what problems you’re running into, but feel free to post on AnswerHub and paste the link here and I’ll help get your question answered.
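
If it helps as a baseline, a bare-bones world-space widget setup looks something like the sketch below. This is just an illustrative example, not the VR editor's actual code; AVRPanelActor and PanelWidgetClass are placeholder names.

```cpp
// Illustrative only: a plain UWidgetComponent presenting a UMG widget in
// world space, attached to an actor. Placeholder names throughout.
#include "GameFramework/Actor.h"
#include "Components/WidgetComponent.h"
#include "Blueprint/UserWidget.h"
#include "VRPanelActor.generated.h"

UCLASS()
class AVRPanelActor : public AActor
{
    GENERATED_BODY()

public:
    AVRPanelActor()
    {
        RootComponent = CreateDefaultSubobject<USceneComponent>(TEXT("Root"));

        Panel = CreateDefaultSubobject<UWidgetComponent>(TEXT("Panel"));
        Panel->SetupAttachment(RootComponent);
        Panel->SetWidgetSpace(EWidgetSpace::World);    // render into the level, not screen space
        Panel->SetDrawSize(FVector2D(1024.f, 576.f));  // resolution of the widget's render target
        Panel->SetRelativeLocation(FVector(40.f, 0.f, 0.f));
    }

    virtual void BeginPlay() override
    {
        Super::BeginPlay();
        if (PanelWidgetClass)
        {
            Panel->SetWidgetClass(PanelWidgetClass);   // the UMG widget blueprint to display
        }
    }

    // The UMG widget to present, assignable in the editor.
    UPROPERTY(EditAnywhere, Category = "UI")
    TSubclassOf<UUserWidget> PanelWidgetClass;

    UPROPERTY(VisibleAnywhere, Category = "UI")
    UWidgetComponent* Panel = nullptr;
};
```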

Hi, I just saw the announcement from Leap Motion regarding their new Orion hand tracking system. It looks so much more accurate and smooth compared to their old system. I wonder if they've achieved enough precision to be useful in Unreal's VR editor as an optional input source. Ultimately, I'd rather use my hands to build in VR as long as it's not a frustrating experience. I'm also impressed by their use of 'Audio Haptics' in their Interaction Engine to help supply the brain with better feedback. Do you currently use some kind of audio feedback in UE4's VR editor?

Really like the demo ‘Blocks’ that they made:

Here is a comparison of the old and new system:

We just tried the new LEAP runtime here for a while and had a lot of fun with it. Definitely a huge improvement over the last version. I think there is a good chance we would try to support it in the future, after we get a little further along with motion controllers first. One problem with using hands for selecting objects is that you need to “pull the trigger” with your other hand (either by making a gesture, or mashing a spacebar, etc), so that you don’t throw off your aim with the pointing hand. It is a bit clumsy, even when the input data is accurate. It could be really cool for sculpting or even for typing, though.

This is how I want Blueprint to work and look in VR:
[attached image]

Think of Tilt Brush mixed with Blueprint, to create a freeform Blueprint graph around you in VR space.

For my own health I think it would be fantastic to ditch my chair and mouse, and use my body and get some real exercise while building a level.

I have a few thoughts.

**Using my hands**
Your back, neck, and wrist muscles will react to the finger-clicking action of the laser pointers the same way they do to a mouse. According to my physiotherapist, it's this clenching that creates the tightening of the muscles that results in shoulder, wrist, and back pain. I would love to use my hands where possible, or at least have several different forms of input to avoid repetition. But using hands would be the best.

Moving around
It might seem a bit silly when you can just fly around a world, but I would enjoy using an exercise bicycle to move around large levels, and, when they're practical, a Katwalk.

Interesting news…

May I ask one little question that's come up? This summer I chose UE4 for my three-year project, which includes in-game building as one of the modules of a VR platform. It sounds like I have functionality very similar to the announced Epic VR editor. Certainly, I'm developing a different UI, some functionality is simpler for users, and there are many additional things, but I think we're racing toward the same point: the creation of the first true massive VR.

So what should I do now? Join your team directly, or keep trying to offer a different take on VR content creation? I mean, it's your engine, guys, so what about the licence issue in this case?

Hi Two-Faced, I think what you are developing (in-game VR editing) and what the Unreal team is developing (in-editor VR editing) are subtly different. In UE4's VR editor you are actually building with editor tools in editor mode. With your tools you are building live inside game mode, where you don't have access to editor-specific functionality. The line between these two modes can become blurred if you build your own tools inside your VR executable that can be compiled and run independently from the Unreal editor. There is nothing that prevents anyone from building these types of tools. But bringing the editor itself into VR is what they are doing differently. So I don't think there is a duplication of effort; it's all really good for VR exploration! The more VR authoring tools and ideas that exist, the better for everyone! :slight_smile:

That's an interesting point I hadn't considered. Would it throw off your aim if you pointed at something with your index finger like a laser pointer, then made a trigger gesture with a quick bend of the thumb on the same hand? I seem to be able to do it naturally without affecting my pointing finger. Maybe I should strap a real laser pointer to my finger and see if this is true. :wink: It's a natural gesture that even kids use to fire a pretend gun. Even if your aim point shifted suddenly when you trigger with your thumb, the two happen almost concurrently, so it seems like it would be fairly easy to capture the aim point from just before the trigger action.
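
To illustrate what I mean, a purely hypothetical sketch (not tied to any particular hand-tracking SDK): keep a short history of aim rays and, when the thumb trigger is detected, use a sample from a moment before the gesture started.

```cpp
// Sketch only: buffer recent aim rays so that when a pinch/trigger gesture is
// detected, selection can use the aim from slightly before the gesture began,
// avoiding the wobble the gesture itself introduces. All names are invented.
#include "CoreMinimal.h"

struct FAimSample
{
    double  TimeSeconds;   // timestamp of the sample
    FVector Origin;        // ray origin (fingertip or hand)
    FVector Direction;     // normalized aim direction
};

class FAimHistory
{
public:
    void Record(double Now, const FVector& Origin, const FVector& Direction)
    {
        Samples.Add({Now, Origin, Direction});

        // Keep only the last half second of samples.
        while (Samples.Num() > 0 && Now - Samples[0].TimeSeconds > 0.5)
        {
            Samples.RemoveAt(0);
        }
    }

    // Return the most recent sample that is at least LeadSeconds older than
    // the moment the trigger gesture was detected.
    bool GetAimBeforeTrigger(double TriggerTime, double LeadSeconds, FAimSample& Out) const
    {
        for (int32 i = Samples.Num() - 1; i >= 0; --i)
        {
            if (TriggerTime - Samples[i].TimeSeconds >= LeadSeconds)
            {
                Out = Samples[i];
                return true;
            }
        }
        return false;
    }

private:
    TArray<FAimSample> Samples;
};
```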

Secondly, I wonder if it would be better to restrict selecting with 'laser pointers' to objects that lie outside our sphere of reachable things, and require direct finger contact with objects that lie within our reach? It just so happens that our brain dynamically classifies objects this way, as existing in reachable or unreachable space. It seems like this might be a good interaction model for VR. One tricky part is how you might switch between direct and indirect selection modes. Perhaps you have to turn on your 'laser pointers' by laying your thumb against your index finger, and split them apart to make a selection.

This is a lot of fun to think about, but I understand your need to focus on touch controllers for now! :slight_smile: