Mixed Reality Status update

Does anyone have a status update about when mixed reality video recording will be possible with UE?

Basically, I’d like to know if there is a branch where the 2D window can be set to a camera other than the VR camera. The last update I can find is this: https://twitter.com/EpicJamesG/status/740248829761597440?s=09

A RoadtoVR article implied that this would be in 4.13. It looks like we’ll have to wait until 4.14 now. No news yet.

4.13 did add a way to designate a motion controller as controlling a camera transform. Nothing for rendering or viewports, though. Networking is unfortunately still the easiest way.
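
For anyone who wants to experiment before an official feature lands, the usual approach is just to parent a SceneCapture2D to a MotionControllerComponent tracking whichever controller is rigged to the physical camera. A minimal sketch, not Epic’s implementation; the class, file name and offset are mine, and it assumes the HeadMountedDisplay module is in your Build.cs:

```cpp
#include "GameFramework/Actor.h"
#include "MotionControllerComponent.h"
#include "Components/SceneCaptureComponent2D.h"
#include "MixedRealityCaptureActor.generated.h"

UCLASS()
class AMixedRealityCaptureActor : public AActor
{
    GENERATED_BODY()

public:
    AMixedRealityCaptureActor()
    {
        // The controller strapped to the physical camera drives the capture transform.
        TrackedCamera = CreateDefaultSubobject<UMotionControllerComponent>(TEXT("TrackedCamera"));
        RootComponent = TrackedCamera;
        TrackedCamera->Hand = EControllerHand::Right;

        SceneCapture = CreateDefaultSubobject<USceneCaptureComponent2D>(TEXT("SceneCapture"));
        SceneCapture->SetupAttachment(TrackedCamera);
        // User-defined offset between the tracker and the physical camera lens.
        SceneCapture->SetRelativeLocation(FVector(5.f, 0.f, -3.f));
    }

    UPROPERTY(VisibleAnywhere)
    UMotionControllerComponent* TrackedCamera;

    UPROPERTY(VisibleAnywhere)
    USceneCaptureComponent2D* SceneCapture;
};
```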

This pull request from Allar might get you part of the way towards what you want: https://github.com/EpicGames/UnrealEngine/pull/2444

I’m not sure if it is possible to set up a real viewport, or if it would need to be based on a scene capture component.

I eventually want something similar to what is available with Unity, in that it spits out multiple layers so you can have objects both in front of and behind the camera footage. For the moment, anything would be an improvement. Networking is hugely fiddly for an end user to set up, and that’s the main use case for us.

I’m testing out this pull request at the moment. In rejecting it, Epic staff mentioned that they are implementing this themselves, but I’m not sure when that will be available.

https://github.com/EpicGames/UnrealEngine/pull/2727. It renders a scene capture to the mirror window.

I’m just using this for making non-mixed-reality B-roll footage at the moment, but it’s working pretty well. Performance is okay, at least when using the Oculus forward rendering branch.

I’ve had a simple solution for mixed reality going for a while, based on the hack I submitted above and some OBS/XSplit-related tomfoolery to sync the video feeds. My solution allows for real-time compositing on one machine, but because of limitations in the post process system I can’t replicate what the Unity folks are doing without major engine changes. The render-to-texture, post process and window control areas of the engine seem disappointingly underdeveloped. This is the list of things holding me back from equivalent results:

  1. No alpha support for post process blendables. I know the plan is to replace the whole post process system at some point, but having this would make render-to-texture support much more usable.
  2. Capturing multiple render targets with different post process settings from a single USceneCaptureComponent. When the post process runs there is a huge amount of per-frame data available, but you can only output one texture as a result. To efficiently produce the output the Unity folks are getting, we need at least two (see the sketch after this list for the brute-force workaround). There seems to be a solution for this in FCompositionGraphCaptureProtocol, but the USceneCaptureComponent code isn’t using it.
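
To illustrate point 2, this is roughly the shape the workaround takes today: two full SceneCapture2D components, each with its own post process settings and render target, which means paying the scene render cost twice. Just a sketch; the function name, variable names and overridden settings are mine, not engine API.

```cpp
#include "Components/SceneCaptureComponent2D.h"
#include "Engine/TextureRenderTarget2D.h"

// Brute-force workaround sketch: two separate captures, each with its own
// post process settings and render target, instead of one capture producing
// multiple render targets.
void SetupLayerCaptures(USceneCaptureComponent2D* ForegroundCapture,
                        USceneCaptureComponent2D* BackgroundCapture,
                        UTextureRenderTarget2D* ForegroundRT,
                        UTextureRenderTarget2D* BackgroundRT)
{
    ForegroundCapture->TextureTarget = ForegroundRT;
    ForegroundCapture->CaptureSource = ESceneCaptureSource::SCS_FinalColorLDR; // tonemapped output
    ForegroundCapture->PostProcessSettings.bOverride_BloomIntensity = true;
    ForegroundCapture->PostProcessSettings.BloomIntensity = 0.f;               // e.g. no bloom on the foreground layer

    BackgroundCapture->TextureTarget = BackgroundRT;
    BackgroundCapture->CaptureSource = ESceneCaptureSource::SCS_FinalColorLDR;
    // Background keeps the default post process chain.
}
```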

My plan at this stage is to build a new USceneCaptureComponent based on the code in FCompositionGraphCaptureProtocol as a plugin. I’d love to know more about what Epic has planned, though; I hate reinventing the wheel.

Hello, any updates about mixed reality support?
I saw this Everest VR - Mixed Reality Trailer on YouTube, and I’m wondering if they used a built-in system.

I really need mixed reality for a demonstration, and I was a little disappointed that it’s not in the 4.14 preview. But when I searched GitHub I found this: https://github.com/degica/UnrealEngine4- Does anybody know if it works? (My HTC Vive is currently, and unfortunately, at my work…)

This code seems to take the same approach I used for drawing alternate render targets to the game window so you can capture them with OBS. It renders two render targets, one to the top and one to the bottom of the window. I assume these are used for the foreground and background scene captures. The big drawback is the lack of alpha channel support in the render targets, so the foreground can’t be blended perfectly onto the camera feed. It will work, though.
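
For anyone curious, the game-level equivalent of that top/bottom layout looks roughly like the HUD below, which OBS can then crop into two sources. This is only a sketch, not the code in that fork; the class name and properties are mine:

```cpp
#include "GameFramework/HUD.h"
#include "Engine/Canvas.h"
#include "Engine/TextureRenderTarget2D.h"
#include "MixedRealityMirrorHUD.generated.h"

UCLASS()
class AMixedRealityMirrorHUD : public AHUD
{
    GENERATED_BODY()

public:
    // Assign these to the render targets your two scene captures write to.
    UPROPERTY(EditAnywhere)
    UTextureRenderTarget2D* ForegroundRT;

    UPROPERTY(EditAnywhere)
    UTextureRenderTarget2D* BackgroundRT;

    virtual void DrawHUD() override
    {
        Super::DrawHUD();
        if (!Canvas || !ForegroundRT || !BackgroundRT)
        {
            return;
        }

        const float Width = Canvas->SizeX;
        const float HalfHeight = Canvas->SizeY * 0.5f;

        // Top half: foreground capture (this is where the missing alpha hurts).
        DrawTexture(ForegroundRT, 0.f, 0.f, Width, HalfHeight, 0.f, 0.f, 1.f, 1.f);
        // Bottom half: background capture.
        DrawTexture(BackgroundRT, 0.f, HalfHeight, Width, HalfHeight, 0.f, 0.f, 1.f, 1.f);
    }
};
```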

Does anyone have a status update on this? It’s not in 4.14.

I’m patiently waiting.

The Trello roadmap indicates that they will likely be working on this through December. Honestly, given the complete radio silence from the UE4 dev team, I would assume that they plan on making a really big marketing announcement about this around the holidays to help push VR.

I really hope so, but I have a bad feeling about it…

Anyway, let’s hope it can come close to competing with this:
http://owlchemylabs.com/owlchemyvr-mixed-reality-update-2/
That is absolutely brilliant!

I’ve been mostly successful in implementing mixed reality using this Spout plugin for UE4:

And TouchDesigner:

I’m sending a keyed texture from TouchDesigner’s SpoutOut node into a dynamic material instance in UE4. I’m then reading back a texture from a SceneCapture2D (a child of a motion controller with a user-defined transform offset) into a TouchDesigner SpoutIn node. The results are visually quite good.
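
For reference, the UE4 side of pushing the incoming Spout texture into the material looks roughly like this. It’s only a sketch: the function names and the “CameraFeed” parameter are placeholders in my own setup, not part of the Spout plugin’s API.

```cpp
#include "Materials/MaterialInstanceDynamic.h"
#include "Materials/MaterialInterface.h"
#include "Components/StaticMeshComponent.h"
#include "Engine/Texture.h"

// Create a dynamic instance of the compositing material and apply it to the
// plane the camera feed is shown on. "CameraFeed" is a texture parameter in
// my own material, not anything defined by the Spout plugin.
UMaterialInstanceDynamic* SetupCameraFeedMaterial(UStaticMeshComponent* CompositePlane,
                                                  UMaterialInterface* CompositeMaterial)
{
    UMaterialInstanceDynamic* MID = UMaterialInstanceDynamic::Create(CompositeMaterial, CompositePlane);
    CompositePlane->SetMaterial(0, MID);
    return MID;
}

// Called whenever the Spout receiver hands over a new keyed frame.
void UpdateCameraFeed(UMaterialInstanceDynamic* MID, UTexture* KeyedTexture)
{
    if (MID && KeyedTexture)
    {
        MID->SetTextureParameterValue(TEXT("CameraFeed"), KeyedTexture);
    }
}
```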

The only issues I’m currently having are:

  1. The UE4 plugin seems to have some sort of memory leak with the Spout Receiver (no issues with the Spout Sender).

  2. I don’t know how to delay the game to match the latency of the incoming keyed texture.

No. 2 is much easier to handle when the composite is done outside of the engine. With the texture being composited inside the engine, I need some way to add a custom amount of latency to everything in the game except the texture, and I have no idea how to do that.

Does anyone have a suggested approach to solving this latency issue?

I’d also come very close to implementing a composite outside of the engine with two SceneCapture2D components, one sending the tonemapped RGB and another sending a modified depth pass to separate the foreground from the background. Unfortunately, that method doesn’t work at all with translucent materials.
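
For what it’s worth, the capture setup for that external-compositing attempt was roughly the following. A sketch only: the names are placeholders, my actual depth pass was modified rather than raw scene depth, and the capture source enum values may differ between engine versions.

```cpp
#include "Components/SceneCaptureComponent2D.h"
#include "Engine/TextureRenderTarget2D.h"

// One capture sends tonemapped color, the other sends scene depth; the
// external compositor (TouchDesigner in my case) splits foreground from
// background by thresholding the depth against the camera-to-player distance.
void SetupExternalCompositeCaptures(USceneCaptureComponent2D* ColorCapture,
                                    USceneCaptureComponent2D* DepthCapture,
                                    UTextureRenderTarget2D* ColorRT,
                                    UTextureRenderTarget2D* DepthRT)
{
    ColorCapture->TextureTarget = ColorRT;
    ColorCapture->CaptureSource = ESceneCaptureSource::SCS_FinalColorLDR; // tonemapped RGB

    DepthCapture->TextureTarget = DepthRT;                                // needs a float render target format
    DepthCapture->CaptureSource = ESceneCaptureSource::SCS_SceneDepth;    // scene depth pass
    // Translucent materials don't write into this depth pass, which is
    // exactly why this approach falls apart for them.
}
```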

Isn’t it enough to just write the results of a SceneCapture2D to the screen and use OBS to handle the compositing?

For that, you would just need to edit the SteamVR renderer code and add an option to draw a custom texture to the screen.

OBS is not a good compositor. I did have a fully developed system for compositing in post using TouchDesigner, with occlusion and relighting, but discovered to my dismay that translucent materials do not render into the world position or depth passes (which I was using to handle occlusion and relighting in TD).

The benefits of ‘compositing’ in engine are quite nice. The player is lit by game lights and even reflected and refracted.

I see, and it’s a nice solution. I was just suggesting an “easier” solution that works too. As far as I saw in other mixed reality videos, they had their background set up properly, with a green screen and everything. For that, I’m pretty sure OBS or other video streaming software would be a good fit. I was unable to test this properly due to the lack of a proper camera.

Also, this feature is often used for trailers and by YouTubers, so they can probably afford the assets required for such a video :slight_smile:

Can you please post a picture or a short video of your solution? I’m interested in the results you’ve got :blush: and wondering how different it is with reflections.

I’d also love to see how well in-game lighting works. Do you de-light the incoming camera feed somehow?

FYI, it’s now labelled as Feb.: Trello
Let’s give it some more upvotes!