Mixed Reality Capture Feedback for Epic

To support the efforts of XR devs everywhere, we’ve been building a solution for compositing real-world video into the virtual world in Unreal Engine 4. Mixed Reality Capture (MRC), available in Early Access as of Unreal Engine 4.20, equips you with the tools you need to project yourself (or any tracked object) into your virtual experience. More information is available in the 4.20 release notes.

This thread serves as a space for us to get feedback from you! Please help us make the MRC tool the best it can be by sharing ways we can improve upon it.

Looking to learn more? Check out our MRC documentation pages.

As I was saying here:

I’m having trouble getting any feedback on the available tracking attachments. Is there anything I need to do for the tool to find the tracker?

Thanks.

Thanks for the amazing feature set. We will surely leverage it in one of our products.
One thing that’s burning under our nails, however, is the ability to write video to disk from within the engine. OBS is fine, but it limits the performance of the system, and it doesn’t let the Unreal application control when and where a video is recorded and written to disk.
We are aware of things like NVIDIA ShadowPlay, but having that control from within the engine, with support for NVIDIA, AMD and Intel chips to encode MP4, would be just great.
Are there any plans to support such a feature in the future?
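
To make the request more concrete, here is a rough sketch of the kind of control we mean. This is not an Unreal or MRC API, just plain C++ piping raw RGBA frames to an external ffmpeg process that does the hardware MP4 encode; the `FrameRecorder` name and the assumption that composited frames can be read back every frame are our own:

```cpp
// Illustration only: pipe raw frames to ffmpeg, which handles the MP4 encode.
// Swap h264_nvenc (NVIDIA) for h264_qsv (Intel) or h264_amf (AMD) as needed.
#include <cstdio>
#include <cstdint>
#include <string>
#include <vector>

class FrameRecorder {
public:
    FrameRecorder(int Width, int Height, int Fps, const std::string& OutFile) {
        const std::string Cmd =
            "ffmpeg -y -f rawvideo -pix_fmt rgba -s " +
            std::to_string(Width) + "x" + std::to_string(Height) +
            " -r " + std::to_string(Fps) +
            " -i - -c:v h264_nvenc -pix_fmt yuv420p " + OutFile;
        Pipe = popen(Cmd.c_str(), "w");   // _popen(Cmd.c_str(), "wb") on Windows
    }

    // Call once per rendered frame with the RGBA readback of the composited view.
    void PushFrame(const std::vector<uint8_t>& Rgba) {
        if (Pipe) {
            fwrite(Rgba.data(), 1, Rgba.size(), Pipe);
        }
    }

    ~FrameRecorder() {
        if (Pipe) {
            pclose(Pipe);                 // flushes and finalizes the MP4
        }
    }

private:
    std::FILE* Pipe = nullptr;
};
```

The point is simply that the application decides when recording starts, where the file goes, and which hardware encoder is used.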

Sincerely

Having the exact same problem.

@GJ and @IntervideoFilm: Thanks for reporting this. I’ve repro’d here, and reported the issue to Dev. No word on a fix, but it’s somewhere in the work queue.

I bought a new capture card and tried the new Mixed Reality feature of UE 4.20, but my capture card (an AVerMedia product) does not seem to be supported.

Is there any other way, or should I just wait until it’s supported? Are there any plans to support other devices?

Thank you for the MRC! The timing couldn’t have been better for us. We (the company I’m working for) are making a Mixed Reality application using Unreal Engine for an exhibition coming in October. We started with our own solution for the MR feature, but the early access version released with UE 4.20 is much more advanced than our own attempts and will save lots of work hours. We were very pleased with how the calibration is done. It allows us to use multiple cameras or computers very easily. There is still room for improvement, though.

We are using a professional videographer in the project. MRC’s current device support list is understandable (at its current state), but it makes things a little bit trickier. Luckily, the only supported video capture device is sold here where we operate, and we also have one of the supported webcams at the office. We plan to test a bunch of webcams and video capture devices in the coming weeks with the aforementioned professional videographer, to find the best and most stable professional solution for the exhibition. UE 4.21 is too far in the future, so we have to go with the 4.20 EA version of the MRC. We will definitely need a camera with a tracker, so it’s nice to hear that the “No Tracking Attachment” bug is in the work queue.

I’m working full time on our project as a developer and I’m happy to help in any way I can. We are currently using Vive Pros with SteamVR 2.0 base stations and 2018 versions of Vive Trackers. Currently we have tested the MRC only with a Logitech C920 webcam.

Any news on the Vive Tracker bug? We would really benefit from that feature, and we have to choose in the coming weeks whether we use MRC at all, or only for stationary cameras. I would gladly join the bug hunt myself if it would help. Is there any way to get the actual MrcCalibrationTool project to test, not just the binaries? Signing an NDA would not be a problem if it contains proprietary code.

We tried the only supported video capture device, the Magewell USB Capture HDMI Gen 2, but had some hiccups with a professional-level camera. The videographer tried two different lenses with different settings, but the lens calibration phase was the issue: the reprojection error was huge when we tried any wide-angle shots. The only time we passed lens calibration was when we used close-up settings. We’re not there yet, but the mixed reality looked very good when a proper camera was used instead of a basic webcam.

Being able to hide myself partially behind a virtual object was really nice and useful.

I am using a Magewell USB Capture HDMI Gen 2 and a Sony A7S for the video capture, but the MRCalibration Tool did not recognise any camera input. Is there anything I missed?
I am also using an HTC Vive. I have checked that the video input is OK with OBS. Please advise. Many thanks.

Having the tracker problem here too. That’s a mandatory feature for us. Any news, please? A big art documentary showing MR is at stake! Many thanks.

We tested the calibration tool again a few days ago, but the lens calibration phase is still a huge problem. We had two Magewell USB Capture HDMI Gen 2 units and two PCs for the test. We tried two cameras (a Blackmagic Pocket and a Panasonic GH5). We wanted to test two setups: a wide angle from 2.5 meters at a downward angle, and a close-up. We tried three different lenses (7-14mm Olympus Pro, 12-40mm Olympus Pro and Sigma 30mm). We tested almost all combinations of those two cameras and three lenses and spent hours on the first step of the calibration, lens calibration. There must be a bug in the calibration, or it’s doing something that we are not expecting.

We just couldn’t get past lens calibration; we tried everything. The reprojection error was always huge, regardless of which camera and lens we were using. Then we found a hack and were able to get past lens calibration. The lens calibration algorithm should only look for the checkerboard, am I right? The background should be irrelevant to the algorithm, or so we thought.

Lens calibration failed every time our camera was pointing at our green screen (a huge one, filling the whole image). We rotated the camera to the other side of the office and did the lens calibration against a background full of straight lines (office doors, ceiling structures, etc.), and it passed with a low reprojection error. Then we rotated the camera back towards the green screen and it failed again. We tested again with a different camera and lens, and the result was the same. Every lens calibration failed against the green screen background and passed when we rotated the camera towards the other side of the office. For some unknown reason, the lens calibration doesn’t like our green screen (three walls and the floor, fabric, not stretched, with some wrinkles).

So, the hack to get the lens calibration working was actually easy. It seems that the camera transform is not saved during that first phase, so it was possible to do the lens calibration against the office background and then rotate the camera towards the green screen to finish the rest of the calibration in the final position.
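
For reference, this is roughly what a checkerboard lens calibration looks like with OpenCV. We are assuming the MRC tool does something along these lines internally (not confirmed by Epic); if so, only the detected checkerboard corners feed the solver and the background should not matter:

```cpp
// Sketch of a typical OpenCV checkerboard lens calibration (our assumption of
// what the tool does internally, not confirmed). Pattern size, square size and
// the capture device index are placeholders.
#include <opencv2/opencv.hpp>
#include <cstdio>
#include <vector>

int main() {
    const cv::Size PatternSize(9, 6);      // inner corners of the checkerboard
    const float SquareSize = 0.025f;       // square edge length in meters

    std::vector<std::vector<cv::Point3f>> ObjectPoints;
    std::vector<std::vector<cv::Point2f>> ImagePoints;
    cv::Size ImageSize;

    // Reference grid of 3D corner positions on the board plane (Z = 0).
    std::vector<cv::Point3f> BoardCorners;
    for (int y = 0; y < PatternSize.height; ++y)
        for (int x = 0; x < PatternSize.width; ++x)
            BoardCorners.emplace_back(x * SquareSize, y * SquareSize, 0.0f);

    cv::VideoCapture Cap(0);
    cv::Mat Frame, Gray;
    while (ImagePoints.size() < 15 && Cap.read(Frame)) {
        cv::cvtColor(Frame, Gray, cv::COLOR_BGR2GRAY);
        ImageSize = Gray.size();

        std::vector<cv::Point2f> Corners;
        // Only the checkerboard pattern itself is searched for here; the rest
        // of the image (green screen or office) is ignored by the detector.
        if (cv::findChessboardCorners(Gray, PatternSize, Corners)) {
            cv::cornerSubPix(Gray, Corners, cv::Size(11, 11), cv::Size(-1, -1),
                             cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 30, 0.01));
            ImagePoints.push_back(Corners);
            ObjectPoints.push_back(BoardCorners);
        }
    }

    cv::Mat CameraMatrix, DistCoeffs;
    std::vector<cv::Mat> Rvecs, Tvecs;
    // The return value is the RMS reprojection error the tool reports.
    const double Rms = cv::calibrateCamera(ObjectPoints, ImagePoints, ImageSize,
                                           CameraMatrix, DistCoeffs, Rvecs, Tvecs);
    std::printf("RMS reprojection error: %f\n", Rms);
    return 0;
}
```

If the detector is somehow picking up false corners in the green-screen image, that could explain the huge reprojection errors we see, but that is only our guess.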

An updated version of the [MRC tool](http://epic.gm/mrccal) has been posted - this should fix the ability to toggle between devices.

Thanks for the info and big thanks to devs too! Tracker is working now and our show will be much better because of this feature.

We went to an agriculture exhibition with MRC and didn’t have any real problems with it. Calibration didn’t work against the green screen again, but we were able to circumvent that (see the earlier post).

We used MRC to bring a real field (about 2 hectares, or 5 acres, in size) to the showroom. A 3D model of the field was made from drone images (hence the low resolution). We also had lots of data from the past summer, including index layers (NDVI) from satellite images, a 3D model of barley made from a real plant (using photogrammetry) and some farm machinery (from manufacturers’ CAD models, converted with Unreal Datasmith). We also had a real weather station in our virtual field in the presentation, and a real (green!) plant that we managed to keep visible in the video feed. We used real data and models made from real things almost exclusively; nothing major was made by artists. The video feed on the big screen had two sources: mixed reality using the external camera, and the presenter’s view from the headset (with virtual hands). The director switched between those two video sources based on what was happening in the presentation.

The show started with a small introduction speech by the presenter (an industry specialist), and then he literally walked into the virtual field and put the headset on. The presenter presented lots of information about cultivation methods and results from the past summer. The viewers were mainly farmers, and we used Mixed Reality as a presentation tool, not a plaything. The large screen was visible through three exhibition halls and worked as a lure to bring people in to watch some educational content. The pillar of earth shown in the image is actually a hole in the ground of the real field (there were four holes/pillars in total). We just inverted the basic idea and used real images from the hole as the texture for a pillar that ascends from the ground. There was also a panel discussion at the end, where the participants were standing in the virtual field. A single presentation lasted about 40 to 50 minutes, and we ran it a total of 10 times during the three exhibition days. There were no issues related to Unreal Engine or MRC. We had to scale our presentation down (to one person and one camera) because we fried one of our PCs, but that was just a hardware issue.

EDIT: The carpet was a standard cheap and thin exhibition carpet in a light green color, glued to the floor as usual, nothing special. We had exactly the same carpet in other areas, but in different colors. We had tested a small piece of the carpet at the office in advance. It seems that MRC works well even with a carpet that has some grooves.

This is what happens when Unreal Engine, Unreal Datasmith and MRC are used outside of the gaming context. Thank you, Epic, for the great tools you’ve made available; this was a fun project!

I loved how easy it was to integrate this into a project for an exhibition, but I’m having a bit of trouble trying to configure things to get better performance. I would like to be able to change resolutions and maybe other quality settings. (I haven’t tried using it with nDisplay, but running the VR on one PC and the AR on another would also be great.)
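
For what it’s worth, the direction I’ve been poking at is console variables; a minimal sketch with example values, and I haven’t verified how much these affect the MRC composite itself:

```cpp
// Sketch: common UE4 quality knobs set via console commands. The chosen values
// are examples only; how much they help the MRC composite is untested.
#include "Kismet/KismetSystemLibrary.h"

void ApplyPerformanceSettings(UObject* WorldContextObject)
{
    // Render resolution as a percentage of the output resolution.
    UKismetSystemLibrary::ExecuteConsoleCommand(WorldContextObject, TEXT("r.ScreenPercentage 80"));

    // HMD render-target density (1.0 = recommended native resolution).
    UKismetSystemLibrary::ExecuteConsoleCommand(WorldContextObject, TEXT("vr.PixelDensity 1.0"));

    // Scalability bucket for post processing: 0 = low ... 3 = epic.
    UKismetSystemLibrary::ExecuteConsoleCommand(WorldContextObject, TEXT("sg.PostProcessQuality 2"));
}
```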

I now understand that the spectator screen is the feed used for MRC, but the documentation on the spectator screen is not very helpful for a relative newbie like myself :slight_smile:
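
From digging around, the spectator screen seems to be driven from UHeadMountedDisplayFunctionLibrary; a minimal sketch, and I can’t say whether MRC overrides whatever mode you set here:

```cpp
// Minimal sketch: setting the spectator screen mode from C++. Assumes the
// "HeadMountedDisplay" module is listed in the project's Build.cs. Which mode
// (if any) MRC expects is something I haven't confirmed.
#include "HeadMountedDisplayFunctionLibrary.h"

void ConfigureSpectatorScreen()
{
    // Mirror a single undistorted eye, cropped to fill the window.
    UHeadMountedDisplayFunctionLibrary::SetSpectatorScreenMode(
        ESpectatorScreenMode::SingleEyeCroppedToFill);
}
```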

I still don’t understand how to align my map/scene to position the camera (spectator) right where I want it to be.

From what I could gather, the camera is put in place by the calibration tool so that the player is shown exactly where the pawn is in the game, so the only way to align the map to the camera is to physically move the camera and recalibrate, or you can use a tracker on it, which I haven’t tested yet.

Thanks for the reply, vshade.
Hm, yeah, OK, the map isn’t loaded until after the calibration. But what I’m reading from your comment is that the pawn’s position and orientation are, at least in part, responsible for where the scene ends up.
So fiddling with that would give me control over the X and Y camera position and the horizontal rotation?
And the physical camera position could give me control over the Z position?
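
To show what I mean, this is the kind of thing I was planning to try: nudging the VR pawn at startup so the virtual scene lines up with the physically placed camera (offset and yaw values are placeholders I’d tune by eye; untested with MRC):

```cpp
// Sketch: shift/rotate the VR pawn so the scene lines up with the calibrated
// camera. Values are placeholders; I haven't verified this against MRC yet.
#include "GameFramework/Pawn.h"

void AlignSceneToCamera(APawn* VRPawn)
{
    if (!VRPawn)
    {
        return;
    }

    // X/Y offset moves the scene sideways/forward relative to the spectator
    // camera; yaw turns it. Units are centimeters and degrees.
    VRPawn->AddActorWorldOffset(FVector(100.0f, -50.0f, 0.0f));
    VRPawn->AddActorWorldRotation(FRotator(0.0f, 15.0f, 0.0f));
}
```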

eh.cin, very nice work! Could you expand on how you managed to make it work? Thanks in advance!