Hi, I was just trying out the VR level editing in Unreal and found it just as time-consuming as Unreal’s normal transform tools, so I wanted to suggest a new way of doing it.
First off, let me describe what I’m trying to do: I have a bunch of small foliage that has to be placed on craggy rock surfaces, and every single plant must be placed by hand. The plants are grass-like, and you have to make sure every root actually goes into the rock; otherwise you can see the roots terminating in thin air and the whole thing looks fake because the grass is floating.
So, here’s what’s difficult about placing them with the normal Unreal transform tools:
A : They must get EXACT rotations, and trying to do that with world-space rotation is a gimbal-lock nightmare. Doing this in a 3D package like Modo would be super easy, because it has a spherical rotate tool that works a lot like a trackball: you can get a rotation on any axis depending on where you place your mouse (see the sketch after this list). Unreal unfortunately doesn’t have that.
B : They also must get EXACT placement, and that’s very difficult because you can’t move objects in screen space easily. Yes, you can move left and right in screen space if you drag the tiny center of the tool handle, but you can’t move objects forward and backward in screen space.
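For reference, here’s roughly what I mean by a trackball-style rotate tool. This is just the generic arcball mapping (written with Unreal’s math types for familiarity), not Modo’s actual implementation: the cursor is projected onto a virtual sphere, and the drag between two cursor positions becomes a rotation about the axis perpendicular to both, so you can reach any rotation axis from where you drag and never hit gimbal lock:

```cpp
// Generic arcball/trackball rotation sketch. Screen coordinates are assumed to be
// normalized to [-1, 1] on both axes, centered on the object being rotated.
#include "CoreMinimal.h"

// Project a 2D cursor position onto a virtual unit sphere around the object.
static FVector ProjectToSphere(float X, float Y)
{
    const float LenSq = X * X + Y * Y;
    if (LenSq <= 1.0f)
    {
        // Cursor is over the sphere: lift it onto the sphere surface.
        return FVector(X, Y, FMath::Sqrt(1.0f - LenSq));
    }
    // Cursor is outside the sphere: clamp to the sphere's silhouette edge.
    const float Len = FMath::Sqrt(LenSq);
    return FVector(X / Len, Y / Len, 0.0f);
}

// Rotation produced by dragging the cursor from (X0, Y0) to (X1, Y1).
static FQuat ArcballDrag(float X0, float Y0, float X1, float Y1)
{
    const FVector From = ProjectToSphere(X0, Y0);
    const FVector To   = ProjectToSphere(X1, Y1);

    // The axis is perpendicular to both projected points, so any drag direction
    // maps to some rotation axis -- that's why a trackball never gimbal-locks.
    const FVector Axis  = FVector::CrossProduct(From, To).GetSafeNormal();
    const float   Angle = FMath::Acos(FMath::Clamp(FVector::DotProduct(From, To), -1.0f, 1.0f));
    return Axis.IsNearlyZero() ? FQuat::Identity : FQuat(Axis, Angle);
}
```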
So, as surprising as it sounds, getting one plant placed properly can take up to a minute with Unreal’s transform tools, with the vast majority of that time spent trying to get the rotation I NEED…
So I just had to try the VR tools to see if they were any better, and the answer is yes/no. Yes, you get the 3D movement you expect, but rotations are still very difficult because you have to use the two-laser rotate tool. Let me go over its problems:
A : (IT TAKES TOO LONG TO GET AN ENTITY INTO A ROTATE STATE) : To rotate an object, you must click on it with both lasers, and that’s very difficult on foliage because the meshes are 90% air, so the majority of your clicks miss. If you try to fire two clicks at a plant mesh at the same time, your odds of missing it and instead rotating AND moving the object behind it are about 90%!! One trick I’ve found is to first half-press the trigger, which only selects the mesh and doesn’t move it. So I select the plant first to clearly see its selection bounds, then click on it with one hand, and only once that hand has successfully grabbed it do I click on it with the other hand. I’ve found this is the only semi-safe way to click an object twice, because if your second click misses the mesh (which happens all the time), nothing bad happens — the miss simply doesn’t register.
B : (GETTING AN EXACT ROTATION TAKES WAY TOO LONG) : Look at your hand and rotate it about 5 degrees on any axis. See how easy that was? It took no work at all. The problem with the laser rotate tool is that in order to rotate an object on a specific axis, you must fire the lasers at two positions on the mesh in alignment with that axis. Not only does that take a lot of time (if you miss the mesh, you have to try again), it can also be impossible, because the area of the mesh you have to click is too small, or obscured by another mesh, a tool handle, a laser, etc. So getting the exact rotation you wanted can take up to ten actual rotations, each one taking anywhere from 0.5 to 4 seconds. That means you can spend half a minute or more just rotating a mesh, whereas, like I just showed, you can rotate your hand in 0.1 seconds, perfectly, every time.
C : (THE LASERS ARE IN THE WAY) : One of the many problems with trying to get an object precisely selected and placed is that it’s partially occluded by the lasers, making it harder to see what’s going on.
D : (THE TOOL HANDLES ARE IN THE WAY) : On top of all these complicated clicks and rotations, which are difficult enough as it is, you’ve also got Unreal’s tool handles in the way. If you’re using the universal tool’s handles, that’s three radial rings around the mesh which probably obscure about 30% of it, slowing you down even more. I accidentally clicked on those handles many, many times and found it was better to switch Unreal to the scale tool, because its handles cover less of the screen.
E : (YOU CAN’T SEE YOUR ITEM OR WHERE YOU’RE PLACING IT) : The objects I’m placing are small plants, and most of the time I couldn’t even see where the plant was, or if and how it was intersecting the rock behind it, because you have two lasers hitting the plant and both of them cast red dynamic light. The entire screen ended up bright red and I couldn’t see anything at all. I assume this normally isn’t a problem when you’re scaling large objects, but it was hideous when trying to place these small plants.
So, those are the five problems I have when trying to use VR to place meshes. Unfortunately, I’ve found it takes about as much time to place meshes with the VR tools as it does without… Sooooo, here’s what I recommend:
A new editing mode that works this way:
You no longer use lasers or lights while actively using this tool. The way it works: when a hand presses the button above the trackpad on the controller, it applies the EXACT transform of THAT controller to whatever mesh(es) are currently selected. So for example, your hand is to the right of the mesh, you press the button and rotate your hand 10 degrees in screen-space roll; the mesh matches that exactly — it rotates 10 degrees around the controller’s pivot point. That’s as simple and logical as it can get.

As for why I recommend the button above the trackpad and not the trigger: I need a way to perform multiple edits to a mesh quickly without worrying about selecting or editing other meshes by accident. So the trigger keeps doing what it normally does — I select a mesh by half-pressing the trigger, and then I give it a number of moves/rotates freely with my hand and the other button. Fast, easy, and safe.

Lastly, while you’re transforming the mesh(es), no lasers, lights, tool handles, or even mesh selection edge highlighting should be drawn, so you can clearly see where the mesh is in relation to its neighboring geo without anything obscuring it. So for example: you’re seeing a laser by default. You half-click on a mesh to select it and see the selection highlighting. You press the button above the trackpad to start the transformation, and the lasers / lights / selection pre-highlights / tool handles / etc. are all hidden. You move/rotate your hand to apply the transform, let go of the button, and all the lasers/handles/selection outlines come back.
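To make the move/rotate part concrete, here’s a minimal sketch of what I mean, written with Unreal’s FTransform math. The surrounding structure (how you’d actually get the controller pose, when Begin/Update would fire) is just a placeholder for illustration, not how the editor currently does it — the point is only the delta-transform math:

```cpp
// Sketch: the selected mesh follows the controller's exact world-space motion,
// pivoting around the controller rather than around its own origin.
#include "CoreMinimal.h"
#include "GameFramework/Actor.h"

struct FHandTransformDrag
{
    FTransform ControllerStart;        // controller pose when the button was pressed
    FTransform MeshStart;              // selected mesh pose at that same moment
    AActor*    SelectedMesh = nullptr;

    // Called once, when the button above the trackpad goes down.
    void Begin(AActor* InSelected, const FTransform& ControllerNow)
    {
        SelectedMesh    = InSelected;
        ControllerStart = ControllerNow;
        MeshStart       = InSelected->GetActorTransform();
    }

    // Called every frame while the button is held.
    void Update(const FTransform& ControllerNow) const
    {
        if (!SelectedMesh)
        {
            return;
        }

        // World-space rigid motion the controller has made since Begin().
        // Applying it to the mesh makes the mesh move/rotate exactly as if it
        // were glued to the hand, so a 10-degree roll of the hand rolls the
        // mesh 10 degrees around the controller's pivot point.
        const FTransform Delta = ControllerStart.Inverse() * ControllerNow;

        SelectedMesh->SetActorTransform(MeshStart * Delta);
    }
};
```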
So that covers how you move and rotate the selection. For scaling, I’d recommend that while the button above the trackpad is held down, pressing that same button on the other controller makes the tool pay attention to the movement of THAT hand: as it moves up/down, the object scales up/down. As for stretching, how about this: while hand 2 is holding down the button AND also presses the trigger, the scale axis handles are displayed around the mesh, and as you move your hand to the right, it positively scales whichever of those stretch handles is most aligned with that movement vector, and the opposite when you move your hand to the left. That way we can scale objects in as close to local screen space as we can get. I’d also have it so that moving the hand right, up, or toward the user is always the positive scale direction, so we can start moving our hand in those directions and trust that it’ll scale the object up every time. If it weren’t that way, 50% of the time a hand movement to the right could be scaling the object down just because the axis handle happens to point left, and we don’t want to waste time visually checking the tool handle directions before we start stretching.
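And here’s a rough sketch of the stretch-axis picking I’m describing: pick whichever scale handle lines up best with the hand’s movement, but flip the handle’s direction first so that moving right / up / toward the user always reads as scaling UP. The view vectors and inputs here are placeholders; only the selection math is the point (the uniform-scale case with the second hand is just mapping that hand’s vertical delta to a scale factor):

```cpp
// Sketch: choose which local scale axis to stretch from the hand's movement.
#include "CoreMinimal.h"

// Returns a signed stretch amount and which axis it applies to (0=X, 1=Y, 2=Z).
static float PickStretchAxis(
    const FVector MeshAxesWorld[3],   // unit-length local X/Y/Z of the mesh, in world space
    const FVector& HandDelta,         // how the hand moved this frame, in world space
    const FVector& ViewRight,         // user's screen-space right
    const FVector& ViewUp,            // user's screen-space up
    const FVector& ViewToUser,        // direction from the mesh toward the user
    int32& OutAxisIndex)
{
    const FVector HandDir = HandDelta.GetSafeNormal();
    OutAxisIndex = 0;
    float BestAbsDot = -1.0f;
    float SignedAmount = 0.0f;

    for (int32 i = 0; i < 3; ++i)
    {
        FVector Axis = MeshAxesWorld[i];

        // Flip the handle so it points toward the user's "positive" directions
        // (right / up / toward the user). That way moving the hand in those
        // directions always stretches the object up, no matter which way the
        // handle happens to point in the world.
        const float TowardPositive =
            FVector::DotProduct(Axis, ViewRight) +
            FVector::DotProduct(Axis, ViewUp) +
            FVector::DotProduct(Axis, ViewToUser);
        if (TowardPositive < 0.0f)
        {
            Axis = -Axis;
        }

        // Pick the handle most aligned with the hand's movement.
        const float Dot = FVector::DotProduct(HandDir, Axis);
        if (FMath::Abs(Dot) > BestAbsDot)
        {
            BestAbsDot   = FMath::Abs(Dot);
            OutAxisIndex = i;
            // Positive when the hand moves "with" the flipped handle, negative otherwise.
            SignedAmount = FVector::DotProduct(HandDelta, Axis);
        }
    }
    return SignedAmount;
}
```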
Anyways, that’s my suggestion for SAFELY transforming your selection as CLEARLY and QUICKLY as possible. Thanks for listening.
-seneca