AR Mesh Occlusion Solutions

I’m playing around with the AR template and was wondering if anyone has come up with a solution for hiding meshes that have been placed in the world once they’re in another room. I don’t want my meshes to follow the player through walls. I want the detected surface planes to be stationary in real-world space. Any suggestions?

I haven’t tried ARKit, but do you get a depth buffer from the camera? Could you use that in a post-process material to draw the camera feed over occluded objects?

@Antidamage I’ll look into that, thanks dude.

@Antidamage What you’ve mentioned makes perfect sense, but I have a question for you: how would you determine what to cover up, and when to do it? Based on GPS data? Based on the phone’s position relative to the detected plane surfaces themselves? I want to be able to place things down, or have things appear, in the player’s current room and play space, then have the player leave for another room and interact with other objects there. I want the objects in the first room to be occluded but remain interactable for when the player returns to them.

Right now in the AR template, it appears that the detected surfaces that objects can be placed on are following the camera. For example, if I place an object on the ground, it appears 5 feet from me on the floor; if I then walk into another room 10 feet away, the object still appears 5 feet from me. I’ll test again to make sure this is the behavior I’m seeing.

Is there something I need to do to detach the camera from the world origin? Logic would suggest that I’m not actually moving around in virtual space; I’m just standing at the origin in virtual space, and when I move in real-world space all I’m doing is changing the video feed from the camera. Does that make sense?

@Antidamage I was just retesting this… it had been a few days since I looked at the results I was getting. What I mentioned above is incorrect: I am moving away from the objects and they are staying in place. The problem is exactly what you described with the camera buffer. Thanks for the heads-up. I’m currently tackling haptic feedback, but once that’s done I’ll move on to this.

As I mentioned, can you get a depth buffer from the camera? If so, it’s just math inside a post-process shader:

lerp(PostProcess0, CameraFeed, saturate((SceneDepth - CameraDepth) / TransitionDistance));

Given that ARKit plots the surfaces in general, I’m somewhat surprised that it isn’t building some kind of occlusion mesh to do this automatically. If it is, it might live in (or could be pushed into) the custom depth buffer, which would be even better than a camera depth feed, if such a thing exists.
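
For what it’s worth, here is a minimal C++ sketch of how a material like that could be wired into the camera at runtime. It assumes a hypothetical post-process material (call it M_AROcclusion) whose graph implements the lerp above and exposes a scalar parameter named TransitionDistance; none of these names come from the AR template:

#include "Camera/CameraComponent.h"
#include "Materials/MaterialInstanceDynamic.h"

// Blend a depth-based occlusion material into a camera's post-process chain.
// OcclusionMaterial would be something like the hypothetical M_AROcclusion.
void AddOcclusionBlendable(UCameraComponent* Camera,
                           UMaterialInterface* OcclusionMaterial,
                           float TransitionDistance)
{
    // Instance the material so the transition distance can be tuned at runtime.
    UMaterialInstanceDynamic* MID =
        UMaterialInstanceDynamic::Create(OcclusionMaterial, Camera);
    MID->SetScalarParameterValue(TEXT("TransitionDistance"), TransitionDistance);

    // Push the instance into this camera's post-process blendables.
    Camera->PostProcessSettings.AddBlendable(MID, 1.0f);
}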

Hi guys, has anybody achieved this?

Hi,

Yes, I had good results with a procedural mesh created at runtime, based on tracked geometry.
The vertex positions from the tracked geometry are fed into the procedural mesh creation; the result then needs to be triangulated. I calculated that in Blueprints, and performance was no issue.
Then apply the AR camera material to it. It occludes and can receive shadows.

Alternatively, just get the extents of a tracked plane and scale a plane mesh to match.
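
If you’d rather do this in C++ than Blueprints, here is a rough sketch of the plane-based variant, assuming you’ve added a UProceduralMeshComponent to an actor. It fan-triangulates each tracked plane’s boundary polygon, which is enough for convex plane boundaries; full scanned meshes need proper triangulation. The function name is just illustrative:

#include "ARBlueprintLibrary.h"
#include "ARTrackable.h"
#include "ProceduralMeshComponent.h"

// Rebuild occlusion geometry from the currently tracked planes.
// Call this whenever tracked geometry is added or updated.
void BuildOcclusionSections(UProceduralMeshComponent* ProcMesh)
{
    ProcMesh->ClearAllMeshSections();

    int32 Section = 0;
    for (UARTrackedGeometry* Geo : UARBlueprintLibrary::GetAllGeometries())
    {
        UARPlaneGeometry* Plane = Cast<UARPlaneGeometry>(Geo);
        if (!Plane)
        {
            continue;
        }

        // Boundary vertices come back in the plane's local space; move them
        // into world space so the component can stay at the identity transform.
        TArray<FVector> Verts = Plane->GetBoundaryPolygonInLocalSpace();
        if (Verts.Num() < 3)
        {
            continue;
        }
        const FTransform LocalToWorld = Plane->GetLocalToWorldTransform();
        for (FVector& V : Verts)
        {
            V = LocalToWorld.TransformPosition(V);
        }

        // Simple triangle fan around vertex 0 (fine for convex boundaries).
        TArray<int32> Tris;
        for (int32 i = 1; i < Verts.Num() - 1; ++i)
        {
            Tris.Append({ 0, i, i + 1 });
        }

        ProcMesh->CreateMeshSection_LinearColor(
            Section++, Verts, Tris,
            TArray<FVector>(), TArray<FVector2D>(),
            TArray<FLinearColor>(), TArray<FProcMeshTangent>(),
            /*bCreateCollision*/ false);
    }
    // Afterwards, set the AR camera material on the component so the
    // sections occlude virtual objects and can receive shadows.
}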

This approach might become obsolete in UE 4.23, which finally implements mobile AR support in the MRMeshComponent.
We’ll see what it can do. The UE 4.23 Preview 2 is still too buggy and doesn’t package, so there’s no chance to test it yet.

Mind that the tracked geometry of current-generation mobile AR is not very accurate. Recognizing a wall and the edges of an open door is hit or miss, especially on lower-end devices.
In 2020, Apple will likely implement accurate depth measuring in their devices, less dependent on having enough light. Similar to their current TrueDepth camera technology in the iPhone X and iPad Pro, but reaching several meters instead of the few centimeters currently used for close-range face detection.

I had set this aside many months ago because I was getting a ton of Mac-specific development errors. Maybe it’s time to pick it back up.

This can be a solution

Hello,

Try checking this box in your Project Settings, Rendering section:

In your session config, apply these settings:

It worked for me, hope it helps you.


It only works on Android, with the Depth API. If you try to use it on iOS with LiDAR, then you have to edit some code in Unreal Engine. At least that’s how it was for me.

Hi Cektantwork, can you explain what part of the code you changed? I work with iOS too, in Blueprint, and I really can’t find how to make the depth occlusion work… :confused:

Hey, can you please explain what you did on the iOS side for occlusion? It would be a great help.
Thanks in advance