-emulatestereo command not working

I need to output 3D side-by-side. My understanding is to use the command line argument “-emulatestereo”, which I’ve seen done during a screencast, but this fails to work on either of my installs of UE4 4.6.1: on a Mac Pro running 10.9.5, and off Bootcamp running Windows 8.1, both 64-bit. My Mac Pro has dual AMD Pro graphics cards and 64 GB RAM. Might I be missing some extensions or drivers? I’m desperate to get this to work and have an important presentation in a week. Many thanks for advising.

Hey -

I noticed you asked this question on the forums as well: issue -emulatestereo and more - XR Development - Epic Developer Community Forums

Let us know if marksatt’s explanation of using the command line to add -emulatestereo to the argument list worked for you.

Cheers

Hello ,

I see “Staff” by your signature, as in staff at Unreal? All these months later, I’m still as desperate to make progress on this issue. I see a green checkmark on your post, indicating the issue is resolved. Is there something I missed? Thanks for advising. In the event you or anyone is still waiting for my answer to your question, I was able to use “-emulatestereo” in the header of a Shortcut (Windows only; couldn’t get this to work on a Mac), but the window frame made this unworkable. I then added “-fullscreen” and that fixed that issue, alas with an over-the-top aggressive fov. I understand UE4 now supports HMDs, which also use side-by-side, but those then also have an alpha mask and barrel distortion routine coded in per the HMD’s lens specs. Is there no way to opt out of these additional processes, leaving plain side-by-side? I would think that route should support an acceptable stereo template in terms of the interaxial of the virtual stereo cameras, fov, and convergence. Many thanks for an update and for any information you can share.

Hey -

Could you provide a bit more information about what you’re trying to accomplish? If you are trying to use Oculus for Mac, this won’t be possible as Oculus has dropped Mac support.

Hello ,

Let me first say how relieved I am to be connecting with staff; I’d very much appreciate it if we can follow this conversation through. When you first posted your reply to my question, I was on the road reading from an iPhone and completely missed the reply, much less who it was coming from.

To your question, our needs vary, but the first priority would be basic SxS (side-by-side) output, independent of platform (we have both). My previous email describes what we’ve tried and with what results, so I won’t repeat that, but this first request comes with two ideas for a solution or workaround. Since the output for most every HMD begins with an SxS format, all that’s needed is to identify and delete the two (or more?) bits of code that call for the alpha mask and the barrel distortion that HMDs need for their specific lens/display systems. The user simply opts out of those add-ons, leaving true SxS; a relatively simple solution, it would appear.

The workaround, and this is something we’ve yet to test, I just thought of it, relates to what I said in the original post back in February about the crazy aggressive wide fov I got when adding “-emulatestereo -fullscreen” to the shortcut in Windows. Who knows what causes this wide fov, but my thought was to chop the fov in half for starters in the Editor and see if that can be used to compensate for the issue when packaging. Your thoughts on that?

At least with SxS I can present to clients where this project is going (developing VR content for a traveling exhibit for international distribution in the science museum space). As we lose 50% of the horizontal resolution with SxS, what we really need (long term, and want now) is Blu-ray-quality 3D HD (and eventually 4K 3D), which, if UE4 supported it, is no issue on this end. We work with the firm that licenses the MVC codec to Sony and others; they implemented it for us in a Unity demo, but we just couldn’t take the lighting and other issues, so we’re back to trying to make things work in UE4. Understand, our clients in the museum space expect quite literally “museum quality”, which is why we see no option but to stick with UE4, but the 3D aspect itself needs to be museum quality as well, and this remains a huge problem for us even getting it to work on the most basic level; consumer-level HMDs simply aren’t enough.

While we’re seeing HMD competitors breaking ground on angular resolution in their displays, the VR space is focused on the consumer market, with competitive pricing forcing a dumbing down of the technology, which I agree is temporary, but a nonstarter nonetheless for applications in museums and beyond. A visible pixel grid or screen-door effect takes imagery back to the Stone Age, circa pre-Standard Definition, though TV began with that ;^) We’d happily shave off fov in the display (not in the game engine’s virtual camera) in support of proper HD or possibly 4K. Again, we’re working with the firm that owns the MVC patent; they produce their own HMD with full HD per eye, we’ll be working with them to develop a custom display for use in museums, and I’ll be speaking to the CEO this weekend about bringing his plugin to UE4. I’d like to facilitate a real dialogue between Epic and this firm. Can you advise me? MVC, btw, allows a single HDMI 1.4 connection to carry frame packing out to a 3D display, whether built into an HMD or not.

So, thank you so much if you can help us, and the others writing in about this issue here, in tracking down real answers to both questions: can we enable basic SxS in support of 3D displays, and can we go one better with the MVC plugin and open a dialogue regarding that possibility?

Best,
Benjy

-emulatestereo is platform agnostic, so it should work on both Mac and Windows without any modification. I verified it locally, and it seems to work! Just make sure you’re actually passing -emulatestereo to the Mac app; e.g., from Terminal, something along the lines of “open YourGame.app --args -emulatestereo” (YourGame.app being a placeholder) should hand the flag through to the executable.

EmulateStereo is implemented by the FFakeStereoRenderingDevice class in UnrealEngine.cpp. It is a base implementation of the IStereoRendering interface. There is no distortion in IStereoRendering itself, so you shouldn’t have to worry about that!

To change the FOV, you simply need to change the math in FFakeStereoRenderingDevice::GetStereoProjectionMatrix(). The values there were chosen back in the Oculus DK1 days in order to accentuate rendering bugs, so that we could more easily identify and fix disparities.
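
For reference, here’s a condensed sketch of what that function looks like in the 4.6-era UnrealEngine.cpp (reconstructed approximately from that source, so check it against your engine version before relying on the details):

virtual FMatrix GetStereoProjectionMatrix(EStereoscopicPass StereoPassType, const float FOV) const override
{
    // Horizontal shift applied per eye, historically used to line up with HMD optics.
    const float ProjectionCenterOffset = 0.151976421f;
    const float PassProjectionOffset = (StereoPassType == eSSP_LEFT_EYE) ? ProjectionCenterOffset : -ProjectionCenterOffset;

    const float HalfFov = 2.19686294f / 2.f; // half the FOV, in radians
    const float InWidth = 640.f;             // per-eye width (debug-era value)
    const float InHeight = 480.f;            // per-eye height (debug-era value)
    const float XS = 1.0f / tan(HalfFov);
    const float YS = InWidth / tan(HalfFov) / InHeight;

    const FMatrix ProjectionMatrix = FMatrix(
        FPlane(XS,   0.0f, 0.0f,               0.0f),
        FPlane(0.0f, YS,   0.0f,               0.0f),
        FPlane(0.0f, 0.0f, 0.0f,               1.0f),
        FPlane(0.0f, 0.0f, GNearClippingPlane, 0.0f)) // infinite far plane projection
        * FTranslationMatrix(FVector(PassProjectionOffset, 0, 0)); // shift each eye's projection in opposite directions

    return ProjectionMatrix;
}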

Ideally, you’d use that as a template for your own plugin, which can implement IStereoRendering and hook in the same way. Hope that helps!

Hello Nick,

Many thanks for the info. I’m admittedly not schooled enough to follow your instructions on what to change in which scripts and where things live; I’m but the creative director and manager for this project. I’m passing this along to my team, a top-drawer CG artist who is new to game engines and a JavaScript programmer who comes from Unity, so we’re all green to varying degrees. If I’m partially getting it, to output SxS we need to open UnrealEngine.cpp and look for the implementation of the IStereoRendering interface. The default should work fine, but if you’d be so kind as to provide any more details (to newbs) on where this file lives, where to look for the stereo interface, and what to do to enable it. If it’s super obvious, don’t bother connecting the dots too closely, but thanks for more info.

Then, the fov and possibly other stereo template settings reside in a function (?) called FFakeStereoRenderingDevice::GetStereoProjectionMatrix(). Where do we find that, and is it also pretty clear how to adjust that setting?

I’ll pass this on to our colleague with the MVC codec and see if we can implement it that easily, which would be just great.

Many thanks for your time!
Benjy

Nick,

We’re trying to muddle through with your instructions, but still need a bit of help. We’ve at least been able to open UnrealEngine.cpp and see the class FFakeStereoRenderingDevice, and are now trying to make sense of the list of “const float” values. Again, the main concern is this problem with fov, and I’m torn between two ways of seeing it: a) the direct approach, being to identify the appropriate variable and how to adjust it (thanks for help with that); b) to see if maybe there isn’t an indirect cause behind the wildly wide fov, which I’m tempted to believe I’ve spotted in the 4th and 5th “const float” values, InWidth = 640.f; and InHeight = 480.f;. These numbers describe Standard Definition, while today we’re going out to HD 1920x1080. My colleague has changed these values and is sending me a build; I’ll test this on my 1080p 3D monitor and report results.

In the meanwhile, I’d appreciate any useful details. Also, I see “ProjectionCenterOffset” and am tempted to understand this to mean the interaxial distance between the virtual cameras, yes? And “GNearClippingPlane”, is that the control over convergence (screen plane, forward/rearward projection, etc.)? Thanks for a tip on which of these variables to tweak, and with what results.

Many thanks in advance.

Benjy

Hello ,

I don’t know if you’ve been following this exchange with Nick (and thanks, BTW, for escalating to him, as he seems most familiar with the details inside UnrealEngine.cpp), but we’re still behind the wall on this, so I’m reaching out to you for additional support. I realize everyone there is most busy, and Nick might not have time for a toddler drooling on itself asking silly questions about the fundamentals; he can’t hope to educate me from such a low starting point. Please know we’re doing our best to come up to speed quickly, and I believe our present questions are more sophisticated.

I posted a comment on Nick’s post: I suspected an issue with the 640 and 480 values, which are associated with Standard Definition NTSC, and experimented with changing those to 1920x1080, alas to no avail; the project still plays out with a super-fisheye fov. I have a hard time believing Epic would set this fov as a default; it’s off the chart. So, what’s causing it? Some detailed instruction on which lines of the code control the various aspects of a stereo template would be most useful.

We’ve been chasing this issue for so long now, and our work is nearing a deadline; we’re very anxious to provide stereoscopic output for a major presentation in two weeks. So, big thanks if you can further facilitate dialing in the correct values.

Best,
Benjy

,

Sorry to come at you twice; maybe you can advise me how best to engage Nick on the new things we’re learning about the FFakeStereoRenderingDevice class. Nick’s post is in green, which I understand to indicate “resolved”; there’s no reply option, only a comment button, which is what I used to attempt querying him for details. If Nick marks his post as “resolved”, how is my comment treated, any differently from a reply? Will he even see my comment? I’m just trying to understand the system; thanks for your help.

Now, on to what we’ve learned, which I’m hoping Nick or someone at Epic can shed light on. We’ve massaged the math on not just the fov but on every variable, and we’re not seeing the first difference in the output, which logic suggests means our changes to UnrealEngine.cpp aren’t enough to affect any variable. My colleague picked up the idea that, indeed, changes to the code need to first be compiled, not simply saved to the file. So, how are changes to UnrealEngine.cpp compiled?

Of course, any other tips on how to make sense of the projection matrix values and such, all those issues from my previous email; your help facilitating there as well is mucho appreciated. Geez, I’d gladly pay for consultation to make progress and make this go away, but it appears nobody outside the half-dozen folks on this forum who’ve inquired about this same issue cares about it, and I see no alternative for resourcing the problem. So, many thanks for your help.

Benjy

Hi Benjy -

Sorry for the delay!

Trying to merge together the questions from your replies, so hopefully I catch them all.

In the GetStereoProjectionMatrix() function, the values are as follows:

  • ProjectionCenterOffset: This is the center offset of the projection matrix, which represents something that is HMD specific. You can safely set this to 0. The actual camera separation is handled in CalculateStereoViewOffset().
  • HalfFov: This represents half of your desired FOV, in radians.
  • InWidth, InHeight: As you suspected, this is the size of the SxS rendering field for ONE eye. In our case, we picked 640x480 for debugging convenience; it would correspond to a 1280x480 screen. If you’re doing SxS on a 1920x1080 screen, you want to use 960 for InWidth and 1080 for InHeight (see the sketch just below this list).
  • GNearClippingPlane: The near clipping plane distance from the camera. Shouldn’t need to change this. We use infinite far plane projection.
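
Putting those together, the edits inside GetStereoProjectionMatrix() for SxS on a single 1920x1080 display might look like this (a sketch; the 90-degree FOV here is just an example value, not a recommendation):

const float ProjectionCenterOffset = 0.0f;                 // no HMD optics to line up with
const float HalfFov = FMath::DegreesToRadians(90.f) / 2.f; // half of a 90-degree FOV, in radians
const float InWidth = 960.f;                               // one eye = half of 1920
const float InHeight = 1080.f;                             // full screen height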

When you change those values, since they’re in the engine, you need to be sure you’re rebuilding the engine, and not just your game module. Otherwise, you won’t see any changes!

That should get you as far as modifying the default -emulatestereo SxS rendering path. If you want to make your own in a nice module, you’ll have to copy that whole FFakeStereoRenderingDevice (say, FMyStereoRenderingDevice), and put it in your game module. Compile, and then in your module’s initialization, you’ll have to create an instance of your stereo rendering device copy, and then set GEngine->StereoRenderingDevice to that instance when you want to turn on stereo.
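
In code, that last step might look something like this (a sketch; FMyStereoRenderingDevice being the hypothetical copy described above):

// Somewhere in your module's initialization, once GEngine is valid:
GEngine->StereoRenderingDevice = MakeShareable(new FMyStereoRenderingDevice());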

Hope that helps!

Hello Nick, so glad to know you’re there and to hear from you. You sense I’m coming from way outside video games; my background is hardcore caving adventure documentary, and my foray into 3D led me to virtual environments for an exhibit in science museums. I only add this to excuse my toddler-level questions. But big thanks for the detailed response. I’m personally not the guy to compile or to modify the default, but I get the distinction, and I’m sharing this with my support in Costa Rica, so if Yoshi has further questions, I’ll let him jump in on this. Again, many thanks for being so responsive; my team has been going in circles on this issue for some time, and it’s great to have access to somebody who KNOWS.

Best,
Benjy

Hello Nick,

Steps forward and one backward. We downloaded the source code for the Engine from GitHub, edited the HalfFov value and InWidth/InHeight, compiled, and tested on a SxS monitor. I’ll describe the results, but first our settings:

(default) const float HalfFov = 2.19686294f / 2.f;

(This looks like a ratio, is it? If so, it’s almost 1:1, thanks for clarifying)

We changed this to:

const float HalfFov = 2.19686294f / 4.f; (which seemed logical, based on the super wide FOV we were seeing)

Also, we changed the InWidth/InHeight values from 640/480 to 960/1080. Had we only changed that, would you expect the default FOV and other default values to produce acceptable 3D? I ask because of what I see with this change combined with changing HalfFov. That is, before setting the 3D monitor to SxS, I see the L and R rasters side-by-side as expected, but two things are off: The FOV is now half of that in the same build with the -emulatestereo command line omitted (the desired FOV, independent of 2D or 3D). That’s why I’m tempted to believe the default value for HalfFov is actually correct. The second issue is that before setting the monitor to SxS, I notice both the L and R rasters aren’t anamorphic; the perspective is correct for 2D (per eye), and so, predictably, when I set the monitor to SxS, these rasters are not unstretched back to a normal raster, but go from normal to stretched.

So, I need to return to the correct FOV and the normal pixel aspect ratio; the question being whether the solution is to simply change the InWidth/InHeight for 1080p monitors, or some other combination. As you say, the ProjectionCenterOffset isn’t relevant here, and the GNearClippingPlane also isn’t relevant; that leaves the projection matrix. If you can shed light on how that box of values works, it could save tons of time arbitrarily populating values to observe a pattern, etc. Fingers crossed this is simply about InWidth/InHeight needing the update to HD.

Many thanks for all your help!
Benjy

Hello Nick,

In the meanwhile, we’ve kicked out more tests, getting closer but no cigar.

const float HalfFov = 2.19686294f / 4.f;
const float InWidth = 960.f;
const float InHeight = 1080.f;
const float XS = 0.5f / tan(HalfFov) / InHeight;

By changing the ratio in the FOV from (nearly) 1:1 to 1:2 and halving the XS value, we’ve now matched the FOV in -emulatestereo to the 2D build, with the anamorphic raster returning to square pixels on a 1080p 3D display in SxS.

Viewing the L and R imagery in overlay, I note that the point closest to the camera in both the L and R views converges, thus defining the screen plane. All other points diverge into positive parallax; as you said, “we use infinite far plane projection.” The issue is that the amount of parallax is way aggressive; expressed as a percentage of screen width, it comes to roughly 15%. The film industry follows super-conservative rules for 3D, the total stereo budget being 3%, with only 2% of that for rear projection. This 15% figure looks suspiciously close to the default ProjectionCenterOffset, 0.151976421f. You had recommended we use a value of 0 for that, but suggested this had only to do with HMDs, not the interaxial of the virtual cameras, which you say falls to CalculateStereoViewOffset(). I don’t see a value for that in the code, but here’s what we’ll try (in lieu of your weighing in earlier):

const float ProjectionCenterOffset = 0.0f;
const float CalculateStereoViewOffset = 0.5f;

I’m leaving out other code here, but is this generally the format to use? Also, how do we shift the convergence point? I see the line for GNearClippingPlane and that whole FMatrix grid of values, but I’m confused how to bring the convergence point forward toward the camera (we desire about 10% forward projection; our lighting in the cave supports it, and screen violations are minimal).

Okay, so unless I hear from you earlier, we’ll test this and report results. Even if we succeed, I’d greatly appreciate your filling in the other details; we’re so close!

Benjy

Nick,

We’re tackling and falling and tackling again, very eager to further engage you. I’m pulling Yoshi Morales in on the conversation; you two speak more the same language. With it taking an hour every time to compile a new engine, he’s pursuing your suggestion to produce our own nice module in which we can get quick feedback on behavior and fix various issues; we just need a little guidance on how to accomplish that. If you can spare the time, we’d be so very grateful.

What’s especially interesting now is that we’ve signed off with the folks owning the MVC patent to implement their full HD 3D via frame packing into UE4. Between the Engine, their beauty resolution, and our data-based virtual environment content, we all come out ahead; a fine drop of VR trickles out the end of the pipeline.

In the meanwhile, we’re hammering on basic side-by-side. I’m attaching an image with various graphics illustrating settings in the .cpp and the results on a 3D monitor. I should have caught this on day one, as it’s truly basic, but I was too caught up in the wild fov and parallax issues to even consider it: the default FFakeStereoRenderingDevice template produces channel-swapped output. Where is that controlled? It appears simply changing the EyeOffset value only pushes the convergence point closer to the camera, another mystery. You indicated the ProjectionCenterOffset value was only relevant to HMDs, but I’m wondering if this shouldn’t be returned to 0 for a 3D monitor.
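
For reference, the only place I can find where the engine decides which half of the screen each eye gets is the device’s AdjustViewRect() override, which in the source we’re reading looks roughly like this (quoting approximately, so treat it as a sketch):

virtual void AdjustViewRect(EStereoscopicPass StereoPass, int32& X, int32& Y, uint32& SizeX, uint32& SizeY) const override
{
    SizeX = SizeX / 2;            // each eye gets half the screen width
    if (StereoPass == eSSP_RIGHT_EYE)
    {
        X += SizeX;               // right eye is placed in the right half
    }
}

If that’s right, I assume swapping the eye mapping here, or the sign of the per-eye offset in CalculateStereoViewOffset(), would flip the channels, but I’d appreciate confirmation.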

In any event, many thanks for your time, and thanks for helping Yoshi with implementing a module and any other issues raised here.

Best,
Benjy

Hello ,

I hate pestering you with my questions directed to Nick, but I’ve been striking out getting a reply from him, and am hoping you can help facilitate again as you so kindly did previously. We’ve gone as far as we can playing with the math in the FFakeStereoRenderingDevice class, and made some progress on things like fov and targeting the correct output raster, but it’s still behaving very unpredictably in other ways; e.g., EyeOffset seems only to push the convergence point forward, and the default 3D output is channel swapped. I’m hoping Nick can help us with some of these issues and instruct us on how to build our own module, so we don’t have to compile a new engine with every experiment, as that would greatly improve our learning curve. Many thanks for your help.

Benjy

Hello Nick,

Many thanks for helping us out with these questions. On top of Benjy’s concerns, I have a couple regarding the source-side implementation of the custom module:

  • What kind of class should this module be? An Actor, a Camera manager, or just a blank class?
  • Which imports are needed? So far I have StereoRendering.h, but do I need another one?
  • Once this class is implemented, how can I create an object of it from the editor, or from another class?

Sorry if these questions sound really dumb; we are just getting started on the code end of Unreal.

Many many thanks again for your help.

Yoshi Morales

Hello Nick,

You’re covered up, yet I come back with new info and more questions. I now realize why the convergence point is moving when we change the EyeOffset: because convergence isn’t locked to a particular value. In fact, this appears to be our last hurdle: where is convergence set? I understand the GNearClippingPlane, which isn’t the convergence point or screen plane. I see nothing that says convergence, though I’m curious whether convergence isn’t controlled by rotating the cameras, toeing them in. How does one rotate for yaw?

In other news, Yoshi is close to creating our own module but still needs some help there, if you can address that as well. Many thanks for carving out some time for us. We’re close to wrapping this demo, and it’s looking pretty amazing; kicking it out in 3D will be the icing on the cake, we just need one last push. Thanks for that.

Benjy

Hi Yoshi and Benjy -

Apologies for the delay. We’ve been very heads down trying to make a deadline, so I didn’t have a chance to respond.

I’ll try to distill all the questions in your posts down to this one, to hopefully get them answered.

First, the CalculateStereoViewOffset() function is what changes the distance between the virtual cameras in the engine automatically. In the default implementation in the FFakeStereoRenderingDevice, you can modify the “EyeOffset” variable. That’s half the IPD of the user. The default in there is 3.2, which implies a virtual IPD of 6.4 cm. Changing that will change the separation, and the amount of parallax you see.
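
For reference, the default implementation looks roughly like this (condensed from the 4.x-era source; the exact signature may differ by engine version):

virtual void CalculateStereoViewOffset(EStereoscopicPass StereoPassType, const FRotator& ViewRotation, const float WorldToMeters, FVector& ViewLocation) override
{
    if (StereoPassType != eSSP_FULL)
    {
        const float EyeOffset = 3.20000005f; // half the IPD, in cm: 2 x 3.2 = 6.4 cm separation
        const float PassOffset = (StereoPassType == eSSP_LEFT_EYE) ? EyeOffset : -EyeOffset;
        // Slide the camera sideways along its local Y axis for this eye.
        ViewLocation += ViewRotation.Quaternion().RotateVector(FVector(0, PassOffset, 0));
    }
}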

ProjectionCenterOffset, which is in GetStereoProjectionMatrix(), actually shifts around the resulting projections, changing where the images converge for you. If you look at the math in there, it’s a translation that’s applied to the final projection, so it basically shifts the projection around on the projected plane. Usually this is used for lining up with the optics on the HMD, but you may need to use it too, depending on your setup, in order to make the image converge. However, if you want to change the amount of parallax, adjusting the IPD in CalculateStereoViewOffset() is the way to go.

For Yoshi: the module should be set up as a module. Unreal doesn’t associate one module with one class, so you’re free to have as many classes as you need in one module. If you need a template, the SimpleHMD plugin is a great place to start. You can copy that and then just replace the FSimpleHMD class with your class, which is based off of FFakeStereoRenderingDevice. The difference is that you’ll just be making a stereo device and not an HMD, so make sure you inherit from IStereoRendering instead of IHeadMountedDisplay. You’ll also need to set up the engine to use it. I’d suggest doing this in StartupModule() for your module, and just directly set GEngine->StereoRenderingDevice to a new instance of your class.
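
A minimal skeleton along those lines might look like this (a sketch assuming a module named MyStereo and a device class FMyStereoRenderingDevice copied from FFakeStereoRenderingDevice; it also assumes the module’s loading phase is late enough that GEngine is valid in StartupModule()):

#include "ModuleManager.h"
#include "Engine.h"

// Hypothetical game module that registers a custom IStereoRendering device.
class FMyStereoModule : public IModuleInterface
{
public:
    virtual void StartupModule() override
    {
        // FMyStereoRenderingDevice is our copy of FFakeStereoRenderingDevice,
        // inheriting from IStereoRendering (not IHeadMountedDisplay).
        GEngine->StereoRenderingDevice = MakeShareable(new FMyStereoRenderingDevice());
    }

    virtual void ShutdownModule() override
    {
        if (GEngine)
        {
            GEngine->StereoRenderingDevice.Reset();
        }
    }
};

IMPLEMENT_MODULE(FMyStereoModule, MyStereo)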
