[Question] Rendering a 3D model to the HUD

Hey, people!

I’ve got a task coming up that I’m not entirely sure how to handle yet, so I thought I’d come ask for some advice. The problem is this:

I want to render a 3D object (in this case, a bust of a character) to the HUD. I want it to have a transparent background, and I want it to have its own lighting. For visualization, the problem is almost exactly the same as rendering the friendly face of Doomguy in 3D, if you were attempting to ‘modernize’ classic Doom.

The functionality I want is an animatable, independently lit, ‘green-screened’ model of a face on my HUD, capable of receiving input.

How I would solve it if I had ultimate power is this:

I’d have a secondary render space (so, not on the current level) with the model I want to render, giving me complete control over lighting and everything as I want it.

Then, set a camera up and get that camera to render its view as a 2d texture (I know how to do this!)

Then, chromakey that texture… somehow… and render that to the HUD.

The problem is, I have no idea how to set up a ‘secondary level’ to render from, short of just making a floating box far outside the normal level’s bounds and setting up my model and lights in that contained environment. I also have no idea how to chromakey a texture in-line, and I suspect it’d involve some C++ coding.

Thoughts? Advice? How would you approach this problem?

I haven’t tried this myself, but the general idea is to set up a render environment far above or below the level, with a bright green-screen-style texture surrounding the model: no lighting on that colour, just emissive. Point a camera at it and render it to a texture.

Then create a material that feeds the render texture into a chroma function to remove the green-screen colour, and apply that material to an Image widget in UMG.
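Roughly, a chroma function like that would compare each pixel against the key colour and ramp the opacity up as the pixel gets further from it. Here’s a minimal plain-C++ sketch of the maths (the `threshold` and `softness` parameters are just illustrative names I picked, not anything from the engine; in practice you’d build the equivalent out of material nodes):

```cpp
#include <algorithm>
#include <cmath>

struct RGBA { float r, g, b, a; };

// Returns 0 alpha for pixels close to the key colour and 1 for pixels
// far from it, with a soft ramp in between so edges aren't hard-cut.
float ChromaKeyAlpha(const RGBA& px, const RGBA& key,
                     float threshold = 0.15f, float softness = 0.10f)
{
    const float dr = px.r - key.r;
    const float dg = px.g - key.g;
    const float db = px.b - key.b;
    const float dist = std::sqrt(dr * dr + dg * dg + db * db);
    // Below threshold -> 0, above threshold + softness -> 1.
    const float t = (dist - threshold) / softness;
    return std::clamp(t, 0.0f, 1.0f);
}
```

Plug that result into the material’s Opacity and the green should drop out while everything else stays visible.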

Again, haven’t tried this and won’t be able to for a few days due to other work, so I don’t know if there are any potential pitfalls you may find. Hopefully this is a lead in the right direction.

I’ve solved it, using a combination of elements.

First, I set up an actor with the static mesh I want rendered to my HUD, and disabled as many of the static mesh’s rendering options (Visible in Reflection, Visible in Ray Tracing, etc.) as I could find.

I then set that actor object up in my FirstPersonPlayerCharacter BP as a child of the camera, positioning it -behind- the normal view camera.

I then created a SceneCapture2D component in a BP, set its Texture Target to a new render target, and set Primitive Render Mode to ‘Use ShowOnly List’. In that BP’s BeginPlay event I populated the ShowOnly array, using ‘Get Actor Of Class’ to add my actor to the list of things it is allowed to render. I also made the SceneCapture2D a component of my FirstPersonPlayerCharacter BP, parented it to the same camera, and positioned it to see the face.

I then made a material, set its Blend Mode to Translucent and Material Domain to UI, plugged the RGB of the texture created from my SceneCaptureComponent into Final Color, and one-minus the Alpha into Opacity.

Because the SceneCapture2D leaves the alpha channel untouched wherever it doesn’t render anything, I don’t need to chromakey at all: I can just tell it which objects to render, and everything else stays masked out in the alpha channel.
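To illustrate what the one-minus-alpha hookup is doing, here’s a plain-C++ sketch of the compositing (names here are illustrative only; the real work happens in the material graph, and the convention assumed is that the capture’s alpha is 1 where nothing was drawn and 0 where the mesh was):

```cpp
struct RGBA { float r, g, b, a; };

// Composite one HUD pixel over the background using the capture's
// inverted alpha -- this is the 'One Minus' node feeding Opacity.
RGBA CompositeHudPixel(const RGBA& captured, const RGBA& background)
{
    const float opacity = 1.0f - captured.a;  // empty pixels become fully transparent
    return {
        captured.r * opacity + background.r * (1.0f - opacity),
        captured.g * opacity + background.g * (1.0f - opacity),
        captured.b * opacity + background.b * (1.0f - opacity),
        1.0f
    };
}
```

So a pixel the capture never touched (alpha 1) lets the HUD background through completely, while a pixel covered by the face (alpha 0) shows the captured colour at full strength.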

Yeah! Still a bunch of cleanup to do for the actual implementation (still not sure if I want it off the level or attached to my character for lighting), but that’s the basic version of what I’ll be using!

Heck yeah! Glad you got it working, this will be fantastic reference for the future!