Rendering Techniques/Features That Are Specific to AR Content

I’ve been looking at the current ARKit/ARCore functionality and I can see where the 4.19 version is probably going. It’s all looking good, and it’ll be nice to have most of the functionality under one unified interface.

At this stage I thought it might be good to have a thread that outlines some of the more challenging rendering techniques and features that are specific to AR content. I’m going to list a few techniques I’ve used in the past, with ideas about how I might go about implementing them in UE4.

Simple Pass Through Geometry:

  • In the past I’ve done this by rendering the camera image as a background and the pass-through geometry as depth-only. That’s not a match for UE4’s pipeline.
  • Currently I’d render the geometry with an unlit material using a copy of the nodes in the current ARKitCameraMaterial. I just hope the extra scene-graph and vertex-processing complexity is insignificant, since it’s more efficient from a fragment-fill point of view.
  • Could use the custom depth buffer to control the ARKitCameraMaterial, but that seems wasteful.
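The core of the unlit pass-through material above is projecting each vertex into screen space and using that as the UV for sampling the camera texture. A minimal, engine-free sketch of that mapping (struct and function names here are illustrative, not UE4 API):

```cpp
#include <cassert>

// Minimal sketch: how a pass-through material derives the UV used to sample
// the device camera texture. A clip-space position is perspective-divided to
// NDC, then remapped from [-1,1] to [0,1] with a vertical flip to match
// texture orientation.
struct Vec4 { float x, y, z, w; };
struct Vec2 { float u, v; };

// clipPos = ViewProjection * worldPos, computed earlier in the vertex stage.
Vec2 CameraTextureUV(const Vec4& clipPos)
{
    const float ndcX = clipPos.x / clipPos.w;   // perspective divide
    const float ndcY = clipPos.y / clipPos.w;
    return { ndcX * 0.5f + 0.5f,                // [-1,1] -> [0,1]
             1.0f - (ndcY * 0.5f + 0.5f) };     // flip V for texture space
}
```

In material-graph terms this is roughly what the ScreenPosition-style nodes inside ARKitCameraMaterial are doing for you; the point is that the geometry shows exactly the camera pixels behind it.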

Pass Through Geometry with Shadows:

  • This is where the fixed-function PBR pipeline hurts us. One approach would be to add a new shading model, called Prelit or similar, that would allow us to break away from physically based lighting and blend unlit output with shadows modulated by the ambient light colour and intensity.
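To make the idea concrete, here is one possible blend for such a hypothetical Prelit model (the function and parameter names are my own, not an engine API): the surface keeps its unlit, camera-derived colour where lit, and is tinted toward the estimated ambient term where shadowed.

```cpp
#include <cassert>

struct Color { float r, g, b; };

// Sketch of a hypothetical "Prelit" shading model: shadowMask is 1 in full
// light (camera colour passes through unchanged) and 0 in full shadow (the
// pixel is darkened by the ambient light colour scaled by its intensity).
Color Prelit(const Color& unlit, const Color& ambient,
             float ambientIntensity, float shadowMask)
{
    const float s = shadowMask;
    // Linear blend per channel between "unchanged" and "ambient-tinted".
    return {
        unlit.r * (s + (1.0f - s) * ambient.r * ambientIntensity),
        unlit.g * (s + (1.0f - s) * ambient.g * ambientIntensity),
        unlit.b * (s + (1.0f - s) * ambient.b * ambientIntensity),
    };
}
```

The appeal of folding ARKit’s ambient light estimate in like this is that shadow density automatically tracks the real scene’s lighting rather than being an artist-tuned constant.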

Pass Through Geometry with Static UVs:
This is for when you want to move your geometry and have the portion of the camera image mapped to a polygon move with the polygon.

  • One approach is to calculate view-projected UVs in the vertex shader and pass them to the pixel shader through the VertexInterpolator node, then animate the polygons through the World Position Offset input. This works well, but it means all your animation must be driven by the material shader, and that can get complicated.
  • Write some code that uses ProceduralMeshComponent to grab the polygons from a static mesh asset and then generates view-based UVs for the frame-1 locations of the polygons every frame. You’d have to record the frame-1 transforms for the meshes if the animation was object-level, and I’m not sure what you’d do if it was skeleton-based. This is an area that could do with some engine-level planning.
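The second approach above boils down to: record the mesh positions at frame 1, then each frame project those recorded positions with the current view-projection to regenerate the UVs, so the camera pixels stay anchored to where the polygon started. A standalone sketch of that update step (plain structs standing in for engine types; names are illustrative):

```cpp
#include <cassert>
#include <vector>

struct Vec3 { float x, y, z; };
struct Vec2 { float u, v; };
struct Mat4 { float m[4][4]; };   // row-major, w assumed 1 on input

// Project a world position to a camera-texture UV (divide, remap, flip V).
Vec2 ProjectToUV(const Mat4& viewProj, const Vec3& p)
{
    const float cx = viewProj.m[0][0]*p.x + viewProj.m[0][1]*p.y + viewProj.m[0][2]*p.z + viewProj.m[0][3];
    const float cy = viewProj.m[1][0]*p.x + viewProj.m[1][1]*p.y + viewProj.m[1][2]*p.z + viewProj.m[1][3];
    const float cw = viewProj.m[3][0]*p.x + viewProj.m[3][1]*p.y + viewProj.m[3][2]*p.z + viewProj.m[3][3];
    return { (cx / cw) * 0.5f + 0.5f, 1.0f - ((cy / cw) * 0.5f + 0.5f) };
}

// Called every frame: the positions are the *recorded frame-1* locations,
// the matrix is the *current* view-projection. The resulting UVs would be
// pushed back into a ProceduralMeshComponent rebuild.
std::vector<Vec2> UpdateStaticUVs(const Mat4& currentViewProj,
                                  const std::vector<Vec3>& frame1Positions)
{
    std::vector<Vec2> uvs;
    uvs.reserve(frame1Positions.size());
    for (const Vec3& p : frame1Positions)
        uvs.push_back(ProjectToUV(currentViewProj, p));
    return uvs;
}
```

The per-frame mesh-section update is the expensive part here, which is why this feels like something that wants engine-level support rather than a ProceduralMeshComponent workaround.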

Pass Through Geometry with Static UVs and Frame Grab:
This is related to the above but has the added element of grabbing the camera frame at the start of the animation to preserve the exact pixels at the time of the effect.

  • You can use Blueprint render-to-texture to render the camera frame into a texture and then simply switch the material on the animated mesh to one that samples it instead of the live camera feed. I’m not sure whether it’s more efficient to switch the material or to have a condition in the material, but I think it’s the former.
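Conceptually the frame grab is just a one-time copy of the live feed into a frozen buffer, with sampling redirected to the frozen copy for the duration of the effect. A toy sketch with plain buffers standing in for the render target and material switch (all names here are illustrative, not UE4 API):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Toy model of the frame-grab flow: BeginEffect() copies the live camera
// buffer once (in UE4: Blueprint render-to-texture into a render target),
// and SampleSource() models the material switch from live feed to the
// captured texture.
struct CameraFeed
{
    std::vector<uint8_t> live;    // updated every frame by the capture system
    std::vector<uint8_t> frozen;  // populated once when the effect triggers
    bool useFrozen = false;

    void BeginEffect()
    {
        frozen = live;            // one-time grab of the current frame
        useFrozen = true;
    }

    const std::vector<uint8_t>& SampleSource() const
    {
        return useFrozen ? frozen : live;
    }
};
```

A single material swap at the trigger point matches this model well: the cost is paid once, whereas a branch in the material is evaluated for every pixel on every frame.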

With these features you’d have almost full freedom to manipulate the real world in any way you could imagine, not just place objects into the environment. I’m hoping that by making this list, other people will put their ideas forward and we can start to boil the problem areas down and get a clear idea of the best approach to solving some of the challenges of producing good AR content in UE4.