There have been some really bad decisions by Epic regarding collision and its support for VR.

I am currently trying to do some Vive VR locomotion experimentation in Blueprint and have run across a massive issue with how Unreal handles component collision. The issue is that UE4 forces your ONLY collision component to be the root of a pawn, character, etc., and any child collision components are ignored completely (when it comes to static meshes). This isn’t a problem if you are making most traditional games, where the character cannot venture away from “center”, but if you want to do room scale VR stuff, it becomes a disaster.

Want to mix traditional controls and room scale? Well, you can’t, because the root for the Vive has to be a scene component (not a capsule), and even if it could be a capsule it wouldn’t follow the player as they walk around in the room scale environment. This is by Valve’s design and would not be a big deal if you could add a freaking capsule to the camera and have its collision work. But you can’t, because Epic doesn’t allow collision to work on children. This means that once your player has moved off the center point, traditional movement will only collide based on the middle of the root space (if there is a collider there), which in turn means players can travel through walls to an extent.

This also means experimental locomotion, stuff like jogging in place + room scale, leaning + room scale, etc., won’t work unless you force your players to move back to the center of their play space every time they wish to move. You literally can only use teleportation + room scale, which is terrible considering what I want to do would be brain-dead simple in Unity, where I can add a capsule collider wherever I want on my character and the collision will just work.

I love UE4, the workflow, the rendering, pretty much everything about it, but I may not be able to use it because of this. I just don’t understand why we cannot select which collider, or colliders, we want to collide with the world. It’s like VR caught one of the biggest and best engine developers out there completely off guard.

Hearing Mr. Sweeney, someone I have the utmost respect for, speak about his vision for VR got me excited to delve into it. Guess I am just a bit frustrated that the engine is actively working against me. I really, really don’t want to go back to Unity :(.

The character actor by itself is definitely not set up for the more advanced aspects of VR. I’ve worked around this problem by separating the player into multiple actors: the headset and motion controllers live on a direct subclass of APawn with no movement component, and a separate ACharacter provides only the standard capsule movement. I have to manually manage how these two actors interact with each other, but it does allow much more fine-grained and flexible control. For example, I use a custom movement method where the player presses the touchpad and it spawns a character capsule that they can move around using the touchpad. When they release the touchpad, it teleports them so their current headset location matches the character’s location, and the character is destroyed. This method can be seen in Spell Fighter VR, and a more advanced system can be seen in this video. I guess the best advice for VR interaction is to make things separate and modular. You can’t make a gun by playing animations on a skeletal mesh attached to your camera anymore; it needs to be a separate actor with interaction and physics all of its own.
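Roughly, the actor split looks like this in C++ (a minimal sketch only; the class and property names are placeholders, the input wiring is omitted, and this is an outline of the idea rather than the actual Spell Fighter code):

```cpp
// Sketch of the two-actor setup: a bare APawn carries the tracked camera and motion
// controllers, while a plain ACharacter ("movement proxy") exists only while the touchpad is held.
#include "GameFramework/Pawn.h"
#include "GameFramework/Character.h"
#include "Camera/CameraComponent.h"
#include "MotionControllerComponent.h"
#include "VRPawn.generated.h"

UCLASS()
class AVRPawn : public APawn
{
    GENERATED_BODY()

public:
    AVRPawn()
    {
        RootComponent   = CreateDefaultSubobject<USceneComponent>(TEXT("VROrigin"));
        Camera          = CreateDefaultSubobject<UCameraComponent>(TEXT("Camera"));
        Camera->SetupAttachment(RootComponent);
        LeftController  = CreateDefaultSubobject<UMotionControllerComponent>(TEXT("LeftController"));
        LeftController->SetupAttachment(RootComponent);
        RightController = CreateDefaultSubobject<UMotionControllerComponent>(TEXT("RightController"));
        RightController->SetupAttachment(RootComponent);
        // Set each controller's hand/motion source as appropriate for your engine version.
    }

    // Touchpad pressed: spawn a standard character capsule roughly under the headset.
    void OnTouchpadPressed()
    {
        FVector SpawnLocation = Camera->GetComponentLocation();
        SpawnLocation.Z = GetActorLocation().Z; // drop to the tracked-space floor height
        MovementProxy = GetWorld()->SpawnActor<ACharacter>(ProxyCharacterClass, SpawnLocation, FRotator::ZeroRotator);
        // Touchpad axis input would then drive MovementProxy's CharacterMovement as usual.
    }

    // Touchpad released: move the VR origin so the headset ends up where the proxy is, then clean up.
    void OnTouchpadReleased()
    {
        if (MovementProxy)
        {
            FVector HeadOffset = Camera->GetComponentLocation() - GetActorLocation();
            HeadOffset.Z = 0.f; // only correct the horizontal offset within the play space
            FVector NewOrigin = MovementProxy->GetActorLocation() - HeadOffset;
            NewOrigin.Z = GetActorLocation().Z; // keep the current floor height (flat-floor sketch)
            SetActorLocation(NewOrigin);
            MovementProxy->Destroy();
            MovementProxy = nullptr;
        }
    }

protected:
    UPROPERTY(EditDefaultsOnly)
    TSubclassOf<ACharacter> ProxyCharacterClass; // the capsule/movement half of the split

    UPROPERTY(VisibleAnywhere) UCameraComponent* Camera;
    UPROPERTY(VisibleAnywhere) UMotionControllerComponent* LeftController;
    UPROPERTY(VisibleAnywhere) UMotionControllerComponent* RightController;

    UPROPERTY() ACharacter* MovementProxy = nullptr;
};
```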

Wouldn’t it be kind of weird to have your VR self collide with the game world when you can’t collide in the real world? Or is there some other thing you’re trying to do?

I found the default VR implementation to be incredibly problematic (write-up of experiences still in progress), but in our case we had no problem with collision. I can see where you’re coming from though; if the component is just offsetting, you’re going to need to work around it. Our problem was more related to the motion control component insisting on being glued to the center of the screen, which is just plain wrong, and it got really awkward when it was lagging a few frames behind.

I have my collision volume on the camera and it works fine. To prevent players from going through walls, just teleport them back to the last valid place every time they try to clip through one (rough sketch below).
There is only one true problem right now for VR: UE4 has no audio panning.
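For what it’s worth, the snap-back idea only takes a few lines. Something along these lines (a minimal sketch, assuming a capsule parented to the camera with overlap events enabled against the level geometry, plus a cached FVector member for the last good spot; all names are placeholders):

```cpp
// Sketch of "teleport back to the last valid spot".
// Assumed members: UCapsuleComponent* HeadCapsule (child of the camera, overlap events enabled);
//                  FVector LastValidLocation.
void AVRPawn::Tick(float DeltaTime)
{
    Super::Tick(DeltaTime);

    TArray<UPrimitiveComponent*> Overlaps;
    HeadCapsule->GetOverlappingComponents(Overlaps);

    if (Overlaps.Num() == 0)
    {
        // Head is in free space: remember this as the last valid spot.
        LastValidLocation = GetActorLocation();
    }
    else
    {
        // Head clipped into something: snap the whole VR origin back.
        SetActorLocation(LastValidLocation);
    }
}
```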

Yeah, I am extending from Pawn, not from Character. My root is a scene component and I set the root/player’s Z position every tick via a line trace from the camera’s position straight down. This allows people to walk up stairs or ramps naturally. I can add gravity if the “ground” isn’t found as well.
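For reference, that per-tick trace boils down to something like this (a sketch only; the trace distance, collision channel, and the simple constant fall are assumptions):

```cpp
// Sketch of the per-tick downward trace: find the floor under the HMD and move the pawn root
// so the player follows stairs/ramps; fall at a constant rate if nothing is hit.
void AVRPawn::Tick(float DeltaTime)
{
    Super::Tick(DeltaTime);

    const FVector Start = Camera->GetComponentLocation();
    const FVector End   = Start - FVector(0.f, 0.f, 300.f); // assumed max trace-down distance

    FCollisionQueryParams Params;
    Params.AddIgnoredActor(this);

    FHitResult Hit;
    if (GetWorld()->LineTraceSingleByChannel(Hit, Start, End, ECC_Visibility, Params))
    {
        // Put the root (the tracked-space origin) on the floor we found.
        FVector NewLocation = GetActorLocation();
        NewLocation.Z = Hit.ImpactPoint.Z;
        SetActorLocation(NewLocation);
    }
    else
    {
        // No ground below: apply a simple constant fall as a stand-in for gravity.
        AddActorWorldOffset(FVector(0.f, 0.f, -980.f * DeltaTime));
    }
}
```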

The problem is that when I add a capsule to the camera (or the root), it only collides with dynamic objects and allows the player to go through static ones. I am not using the character movement component at all.

I have played Spell Fighter and I am familiar with your implementation. I describe what I am trying to do at the bottom :). I have also made a weapon system that works as you describe; everything is its own actor. The problem is that if you attach a weapon to your character’s hands (the most performant method), that actor loses its collision against static meshes as well, because it becomes a child.

I really don’t understand why Epic thought this was a good idea. If a developer wants a child to collide, it should collide with everything, including static meshes. There is no reason for it not to, particularly since it works this way in a lesser engine (Unity).

I only allow collision when using traditional movement. While walking around in room scale, any overlap teleports the player backwards to prevent walking through walls.

Are you doing a stationary root position for room scale? Stuff like Hover Junkers should work pretty well even with the current implementation.

This works well for room scale, but can you imagine trying to navigate a tight hallway using traditional controls and being teleported away from the wall every time you get close? That would be aggravating. This may be the only solution I have, though.

For those curious, I am implementing a locomotion system where walking/jogging in place decides A) if you should move, and B) what velocity you should move at. I combine that with leaning for directionality for fairly natural locomotion that reduces motion sickness.
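For anyone curious, the core of that idea can be sketched roughly like this (the bob detection, thresholds, and the way lean is measured here are my own guesses at an implementation, not the actual system described above):

```cpp
// Rough sketch of jog-in-place + lean locomotion. All numbers and heuristics are guesses.
// Assumed members: float PreviousHeadZ = 0.f; float BobSpeed = 0.f;
void AVRPawn::Tick(float DeltaTime)
{
    Super::Tick(DeltaTime);

    // A) Should we move, and how fast? Estimate jogging-in-place from vertical head bob.
    const float HeadZ  = Camera->GetComponentLocation().Z;
    const float RawBob = FMath::Abs(HeadZ - PreviousHeadZ) / DeltaTime; // cm/s of vertical head motion
    PreviousHeadZ = HeadZ;
    BobSpeed = FMath::FInterpTo(BobSpeed, RawBob, DeltaTime, 2.f);      // smooth out the oscillation

    const float MinBob    = 20.f;  // below this, treat the player as standing still
    const float MaxSpeed  = 300.f; // cm/s cap
    const float MoveSpeed = (BobSpeed > MinBob) ? FMath::Min(BobSpeed * 2.f, MaxSpeed) : 0.f;

    // B) Which direction? Lean = horizontal offset of the head from the play-space origin.
    FVector LeanDir = Camera->GetComponentLocation() - GetActorLocation();
    LeanDir.Z = 0.f;

    if (MoveSpeed > 0.f && LeanDir.SizeSquared() > FMath::Square(10.f)) // ignore tiny leans
    {
        LeanDir.Normalize();
        // Collision is handled separately (e.g. the snap-back approach discussed above).
        AddActorWorldOffset(LeanDir * MoveSpeed * DeltaTime);
    }
}
```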

I am just an extreme noob, but my setup is quite similar to yours. I also use a ray trace to compensate for terrain height, and I fixed the collision box problem in the same way. I use the left motion controller’s direction (with the Z component removed) for the forward vector and the trigger for speed. I cast a ray trace 100 units ahead along that vector, relative to my camera’s origin, and a branch decides whether my movement component gets fired in that direction, depending on whether the trace returns a hit. Since the control is quite similar to your leaning approach, I assume that could work for you. It works like a charm, and climbing stairs etc. still works as well.
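In code, that forward-probe check might look something like this (a sketch of the idea only; the 100-unit distance comes from the post above, while the speed, channel, and names are assumptions, and a real setup would gate a movement component input instead of the direct offset used here):

```cpp
// Sketch of the forward-trace movement gate: only move if the path ahead is clear.
// Assumed member: float TriggerAxis (0..1), set from the left controller's trigger axis binding.
void AVRPawn::Tick(float DeltaTime)
{
    Super::Tick(DeltaTime);

    // Forward vector from the left motion controller, flattened onto the ground plane.
    FVector MoveDir = LeftController->GetForwardVector();
    MoveDir.Z = 0.f;
    MoveDir.Normalize();

    // Probe 100 units ahead of the camera in the intended move direction.
    const FVector Start = Camera->GetComponentLocation();
    const FVector End   = Start + MoveDir * 100.f;

    FCollisionQueryParams Params;
    Params.AddIgnoredActor(this);

    FHitResult Hit;
    const bool bBlocked = GetWorld()->LineTraceSingleByChannel(Hit, Start, End, ECC_Visibility, Params);

    // Move only if nothing is in the way; the trigger scales the speed.
    if (!bBlocked && TriggerAxis > KINDA_SMALL_NUMBER)
    {
        const float MaxSpeed = 300.f; // cm/s, assumption
        AddActorWorldOffset(MoveDir * TriggerAxis * MaxSpeed * DeltaTime);
    }
}
```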