Mixed Reality Status update

    I attached the spectator camera to a Vive controller, then held my smartphone just above the controller and mixed the footage in Adobe After Effects. I'm making a better video today!
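    (For anyone wanting to reproduce this in-engine: a minimal, hypothetical C++ sketch of parenting a scene capture to a tracked Vive controller is below. The ASpectatorCameraRig name and its components are assumptions for illustration, not code from this post.)

    // SpectatorCameraRig.h -- hypothetical actor: a scene capture parented to a tracked controller.
    #pragma once

    #include "CoreMinimal.h"
    #include "GameFramework/Actor.h"
    #include "MotionControllerComponent.h"
    #include "Components/SceneCaptureComponent2D.h"
    #include "SpectatorCameraRig.generated.h"

    UCLASS()
    class ASpectatorCameraRig : public AActor
    {
        GENERATED_BODY()

    public:
        ASpectatorCameraRig()
        {
            // The tracked controller (or tracker) that the physical camera is strapped to.
            // Select which device drives it via the component's motion source / hand setting.
            MotionController = CreateDefaultSubobject<UMotionControllerComponent>(TEXT("MotionController"));
            SetRootComponent(MotionController);

            // The virtual spectator camera follows the controller and renders into a render
            // target, which can then be captured and composited with the phone/camera footage.
            SceneCapture = CreateDefaultSubobject<USceneCaptureComponent2D>(TEXT("SceneCapture"));
            SceneCapture->SetupAttachment(MotionController);
        }

        UPROPERTY(VisibleAnywhere)
        UMotionControllerComponent* MotionController;

        UPROPERTY(VisibleAnywhere)
        USceneCaptureComponent2D* SceneCapture;
    };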

    Comment


      I'm not sure about the difference between using the "Spectator Screen" and the "Mixed Reality Framework" plugin. Can anyone explain the distinction?

      Comment


        We have created a mixed reality video for our trailer for "VR Shooter Guns". We use a custom 4.14 engine build and a Vive tracker. However, we put the footage together afterwards (not in real time):

        https://www.youtube.com/embed/OvHBCzdlIT8

        VR Shooter Guns - Arcade Shooter for Vive

        Comment


          I posted this on a thread about spectator mode:

          We had our own code to output a spectator cam to the main window a year ago, and we used OBS to capture the window and combine it in real time with a green-screen camera feed. If you want a real-time result, you need to run the game output through a capture card so that OBS can delay the footage enough to sync it with the camera footage. A one-system hack is to use XSplit to rebroadcast the game capture as a DirectShow device, which OBS can then delay just as it does with capture cards.

          For best results you also need to render a foreground-only view to composite on top of your green-screen footage. The ideal setup is two machines and two low-latency HD or 4K capture cards: one machine to run the VR and spectator views, and one to capture and composite the result. That way the capture machine has enough headroom to composite the video in real time on set and to store the raw streams for high-quality compositing in post.

          I would stick to OBS-based mixed reality because you can sync the video and game footage, although having streamed-video code in the engine is definitely a good thing.

          All this reminds me that I have some improvements to the SceneCaptureComponent that allow you to capture multiple textures off a single render. It let us get the foreground and background textures without the extra overhead of two scene captures. I need to update it to the latest engine and submit it.
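          (A rough, hypothetical illustration of the foreground-plus-background split discussed here; this is not Joti's code, and the AMixedRealityCaptureActor name and its members are assumptions: two ordinary scene captures on the tracked camera transform, each writing to its own render target.)

          #include "Kismet/KismetRenderingLibrary.h"

          // Assumed members on the hypothetical capture actor:
          //   USceneCaptureComponent2D* BackgroundCapture; // full scene from the tracked camera
          //   USceneCaptureComponent2D* ForegroundCapture; // same transform, foreground only
          //   UTextureRenderTarget2D*   BackgroundTarget;
          //   UTextureRenderTarget2D*   ForegroundTarget;

          void AMixedRealityCaptureActor::BeginPlay()
          {
              Super::BeginPlay();

              // Background: the whole scene as seen from the tracked (real) camera position.
              BackgroundTarget = UKismetRenderingLibrary::CreateRenderTarget2D(this, 1920, 1080);
              BackgroundCapture->TextureTarget = BackgroundTarget;
              BackgroundCapture->bCaptureEveryFrame = true;

              // Foreground: same transform, separate target. In practice this capture also needs
              // a green-screen / distance-clipping material so that only geometry between the
              // camera and the HMD survives; that material is not shown here.
              ForegroundTarget = UKismetRenderingLibrary::CreateRenderTarget2D(this, 1920, 1080);
              ForegroundCapture->TextureTarget = ForegroundTarget;
              ForegroundCapture->bCaptureEveryFrame = true;
          }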

          Comment


            Originally posted by Joti View Post
            All this reminds me that I have some improvements to the SceneCaptureComponent that allow you to capture multiple textures off a single render. ... I need to update it to the latest engine and submit it.
            Have you managed to submit your improvements to the SceneCaptureComponent?

            Comment


              Originally posted by emretanirgan View Post
              In case anyone is wondering, with the latest 'Spectator Screen' changes it should now be possible to do proper mixed reality capture without the need for a custom plugin. I'm still playing with it and I want to write up something once I test it out, but here's how I'm doing it so far:

              - Have 2 SceneCapture2D Actors in the scene. Each one renders to a different Render Target. One will be used for capturing just foreground (what's between the headset and the SceneCapture2D), and the other one will capture the whole scene. You can use something like the green screen shader that was mentioned in this thread for the foreground capture object. And you can check out this thread to figure out how to set the transform of the SceneCapture2D Actors to be the same as the 3rd Vive controller transform. (Since Oculus also announced support for additional tracked controllers, it should also be possible to use the transform coming from a 3rd Touch).
              - Set your spectator mode to 'Texture Plus Eye', and have the eye take up one quadrant, while the texture takes up the whole view. Have the eye draw over the texture.
              - Create a PostProcess Material that combines two Render Targets into one image. I can post my material nodes once I'm done testing the whole thing, but basically set the UTiling and VTiling of your TexCoord[0] to 2, then sample the two Texture Samples and combine them so each one takes one half of the resulting image.
              - Draw this material to another render target on Tick and set that to be the Spectator Mode Texture.
              - Either capture the individual quadrants in OBS and composite them so that the real camera is sandwiched between your foreground and background, or capture the whole window, then do your editing and compositing in another software like After Effects or Premiere. I'd prefer the latter if you don't have to stream your mixed reality footage, since you get a lot more control over the final result.

              That should be it! I don't know if this is the most optimized way to do it, but I just did a simple test on my Oculus where I had two scene captures in my scene capturing every frame, and it was comfortably running at 90 fps. Whereas with the plugin that was mentioned in this thread over the past couple of months, I had lower-res render targets with scene captures that weren't even running every frame, and most of the time I was at 45 fps on the Vive.

              I'll try to test it out and write up something within the next few weeks if that'd be helpful. Here's an example screenshot below that I got using the steps I mentioned.

              [attached example screenshot]
              Can you please post the material that combines the two render targets?
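              (As an aside while waiting for the material: the spectator-screen half of the quoted steps maps fairly directly onto the UE4 C++ API. A minimal sketch is below; it is not emretanirgan's code, and the choice of the top-left quadrant for the eye view is an assumption.)

              #include "HeadMountedDisplayFunctionLibrary.h"
              #include "HeadMountedDisplayTypes.h"

              // 'Texture Plus Eye' as described above: the combined texture fills the whole
              // spectator window, and the eye view occupies one quadrant drawn on top of it.
              static void SetupSpectatorScreen()
              {
                  UHeadMountedDisplayFunctionLibrary::SetSpectatorScreenMode(ESpectatorScreenMode::TexturePlusEye);

                  UHeadMountedDisplayFunctionLibrary::SetSpectatorScreenModeTexturePlusEyeLayout(
                      FVector2D(0.0f, 0.0f),    // EyeRectMin: eye view in the top-left quadrant (assumed)
                      FVector2D(0.5f, 0.5f),    // EyeRectMax
                      FVector2D(0.0f, 0.0f),    // TextureRectMin: texture covers the full window
                      FVector2D(1.0f, 1.0f),    // TextureRectMax
                      /*bDrawEyeFirst=*/ false  // texture first, eye drawn over it
                  );
              }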

              Comment


                @emretanirgan Can you please share the Blueprint you used to write the material to a render target? For me, writing the material to a render target does not work for some reason.
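                (Not emretanirgan's Blueprint, but for reference, a minimal hypothetical C++ equivalent of "draw the combining material to a render target on Tick and set it as the spectator texture"; the AMixedRealityCaptureActor, CombineMaterial, and SpectatorTarget names are assumptions.)

                #include "Kismet/KismetRenderingLibrary.h"
                #include "HeadMountedDisplayFunctionLibrary.h"

                // Assumed members on the hypothetical capture actor:
                //   UMaterialInterface*     CombineMaterial; // samples the foreground and background render targets
                //   UTextureRenderTarget2D* SpectatorTarget; // final texture shown on the spectator screen

                void AMixedRealityCaptureActor::Tick(float DeltaSeconds)
                {
                    Super::Tick(DeltaSeconds);

                    // Render the combining material into the spectator render target...
                    UKismetRenderingLibrary::DrawMaterialToRenderTarget(this, SpectatorTarget, CombineMaterial);

                    // ...and hand that texture to the HMD spectator screen (strictly only needed once,
                    // but calling it per frame is harmless).
                    UHeadMountedDisplayFunctionLibrary::SetSpectatorScreenTexture(SpectatorTarget);
                }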

                Comment


                  I used the mixed reality sample for UE4 at:
                  https://github.com/richmondx/UE4-MixedReality
                  but when compiling BP_VRCapture2D, it could not find a function named "SetCaptureRenderTarget" in SteamVRFunctionLibrary.
                  How can I make sure "SteamVRFunctionLibrary" has been compiled with that function?

                  Comment


                    I don't know if anyone else has made any progress using the Mixed Reality Framework, but I have been able not only to use the Calibrator map to sync the controllers to your hands in video, but also to use any Windows-compliant video source, e.g. webcams and HDMI-to-USB3 encoders (for camcorders and DSLRs), to composite the video and VR camera directly to the screen, which can be recorded via an HDMI recorder or a software video capture tool, or used as a live feed for streaming or a monitor. Sadly, there is one issue I have run into: an extremely dark image, as though the levels have been crushed dramatically. The XR Framework does use a Spectator Screen object, but the view shown in the headset is well lit; it is just the final output, when you hit Run or compile, that appears dark.

                    If anyone is interested in learning more about what I have discovered, I can make a tutorial video and some rough documentation. If anyone out there is already familiar with what I am talking about, any suggestions on how to improve the video output would be gratefully accepted.

                    Here are a few images and a link to a sample video on YouTube: https://youtu.be/h3c4eChdhzY
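                    (Purely a guess from the symptoms, not something confirmed in this thread: a crushed, too-dark composite is sometimes a linear-vs-sRGB gamma mismatch on the render target that holds the final output. One cheap experiment, assuming a UTextureRenderTarget2D is involved somewhere in the chain, is sketched below.)

                    #include "Engine/TextureRenderTarget2D.h"

                    // Guess at the dark-output symptom: if the composite render target is interpreted
                    // with the wrong gamma (linear vs. sRGB), the picture looks crushed and dark.
                    // TargetGamma = 0 means "use the default"; forcing roughly 2.2 on the target that
                    // holds the final composite is a quick way to test the theory.
                    void ApplyDisplayGammaWorkaround(UTextureRenderTarget2D* CompositeTarget)
                    {
                        if (CompositeTarget)
                        {
                            CompositeTarget->TargetGamma = 2.2f;
                        }
                    }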

                    Comment


                      Originally posted by mebalzer View Post
                      I don't know if anyone else has made any progress using the Mixed Reality Framework, but I have been able not only to use the Calibrator map to sync the controllers to your hands in video, but also to use any Windows-compliant video source ... If anyone is interested in learning more about what I have discovered, I can make a tutorial video and some rough documentation.
                      Great work! I'd really appreciate a tutorial on how you got here.

                      Comment


                        This thread hasn't been active in a while, so I wanted to ask whether there is any usable progress right now?

                        We need to build a test project in Unreal with the HTC Vive; we have the two controllers and an HTC Vive Tracker that we would attach to a camera pointing at a green screen.

                        But we can't find any documentation on how to set up an Unreal project to get us started...
                        This is kind of urgent, since we've reserved a green-screen studio for a test, so any help would be greatly appreciated!
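                        (Until official documentation appears, one common starting point, sketched here with no claim to being the official way and with the device-type assumption noted in the comments, is to poll the Tracker pose through USteamVRFunctionLibrary and drive the scene capture with it.)

                        #include "CoreMinimal.h"
                        #include "SteamVRFunctionLibrary.h"

                        // Sketch: Vive Trackers usually report as ESteamVRTrackedDeviceType::Other
                        // (this can vary by engine/SteamVR version). Read the first such device's pose.
                        static bool GetFirstTrackerPose(FVector& OutPosition, FRotator& OutRotation)
                        {
                            TArray<int32> DeviceIds;
                            USteamVRFunctionLibrary::GetValidTrackedDeviceIds(ESteamVRTrackedDeviceType::Other, DeviceIds);

                            if (DeviceIds.Num() == 0)
                            {
                                return false;
                            }

                            return USteamVRFunctionLibrary::GetTrackedDevicePositionAndOrientation(
                                DeviceIds[0], OutPosition, OutRotation);
                        }

                        // Per frame (tracking-space values; apply your measured camera offset/calibration on top):
                        //   FVector Pos; FRotator Rot;
                        //   if (GetFirstTrackerPose(Pos, Rot))
                        //   {
                        //       SceneCapture->SetRelativeLocationAndRotation(Pos, Rot);
                        //   }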

                        Comment


                          Originally posted by SamuelEnslin View Post
                          This thread hasn't been active in a while, so I wanted to ask whether there is any usable progress right now? ... This is kind of urgent, since we've reserved a green-screen studio for a test, so any help would be greatly appreciated!
                          I'm personally looking into this solution here: https://liv.tv/blog/mixed-reality-htc-vive-tracker

                          It does seem to support only selected titles; however, if there were a way to add your own engine builds on the fly, the tool would be really useful for MR capture.

                          Update:
                          Join their Discord, ask for the SDK, and add games yourself.

                          Comment


                            Awesome!! Great find until Epic is ready to provide us with their version.

                            Comment


                              Hey there,

                              Can Ben Marsh or any of the Unreal Engine devs working on the Mixed Reality Framework give us some information about when we can expect a working version, and what exactly we can expect?
                              I was quite excited when I upgraded to 4.19 and saw the Mixed Reality Framework plugin. However, trying to actually use it turned out to be impossible. Not only is there absolutely zero documentation, but the sample Calibration map included in the folder seems to be completely broken (or wrongly used by me; I can't really tell). I partly got it working once, and even then the calibration process seemed unusable for any end consumer (far too much work and inconsistency compared to some of the simple mixed-reality configurators out there for Unity games). It also seems to support only in-engine compositing (which produces images that are far too dark) and does not output the depth, background, and foreground layers (either as separate output streams or as a split screen). That makes it potentially more user friendly, but also very limited in use (seriously, most PCs already struggle with VR alone; having to composite the images on the same machine doesn't seem viable at the moment). And it crashed multiple times anyway (I never got past the configuration of the controllers).

                              So is there actually any progress on this feature?
                              I'd say having it "announced" (or rather mentioned) almost two years ago and shown off at multiple events (including GDC) justifies some updates or info from time to time. It's getting really frustrating (every approach we've tried in Blueprint-only projects so far resulted in a huge performance hit and suboptimal results). We would already be happy with a simple way to draw the needed layers to the screen (a 4-way split) and then composite them in external software.

                              Comment


                                In case anyone wonders how to combine two textures into one:

                                https://blueprintue.com/blueprint/5x2fqktf/

                                Last edited by fengkan; 05-12-2018, 06:25 AM.

                                Comment
