-emulatestereo command not working

Once you do that, there’s no need to instantiate any class. That will register the stereo device, and you’ll begin rendering in stereo. Also, StereoRendering.h is needed, and you may need Engine.h, but since your module is very simple, that is probably all you need.

Hello Nick,

I knew you’d show up, many thanks for the follow-up. Actually, we scored earlier today with acceptable 3D in side-by-side, what a relief. Indeed, the default is channel-swapped, left camera goes to right eye and vice versa, so we used a negative value on the EyeOffset to compensate, and half the default value (negative) at -1.6, while still somewhat aggressive, did the trick.

If I’m following you, ProjectionCenterOffset controls the convergence point. We set this back to 0, per your original instructions, and while we also had another variable in play that might account for the fix, namely const float XS = 0.5f/tan(HalfFov);, the convergence is now acceptable. To note, the zero value here doesn’t rear-project everything; the screen plane looks to be a bit behind the tip of the gun in the first-person shooter template, with the gun and hand all forward projected. It’s not too bad the way it is.

Now onto full HD 3D via the MVC codec; I’ll forward your good info to Yoshi to see if he can follow the yellow brick road to a module. Many thanks for your continued help, even if you’re taken away by your work. We might be back, but in the meantime, big thanks. I hope this discourse helps the (handful of) others out there seeking to output SxS on standard 3D monitors without depending on proprietary boards and such, downstream of this otherwise (deceptively) simple bit of scripting.

Best,
Benjy

Thanks a lot, Nick, you have been most helpful.

I tried what you explained and it looks like it’s getting somewhere, but I’m not able to see the StartupModule() function working at the beginning of the game. Here’s what my module .cpp looks like (SideBySide). I added the simple hello world at the start just to make sure that the function is being called. Can you please take a look at it and perhaps let me know why the function isn’t being called?

#include "SideBySide.h"
#include "StereoRendering.h"
#include "RHIStaticStates.h"
#include "Engine.h"


class FMyStereoRenderingDevice : public IStereoRendering
{
public:
	virtual ~FMyStereoRenderingDevice() {}
	virtual bool IsStereoEnabled() const override { return true; }
	virtual bool EnableStereo(bool stereo = true) override { return true; }
	...
};

IMPLEMENT_PRIMARY_GAME_MODULE(FDefaultGameModuleImpl, SideBySide, "SideBySide");

class FSideBySide : public SideBySide
{
	void FSideBySide::StartupModule()
	{
		GEngine->AddOnScreenDebugMessage(-1, 5.f, FColor::Yellow, TEXT("HELLO WORLD"));

		TSharedPtr<FMyStereoRenderingDevice, ESPMode::ThreadSafe> FMyStereoRenderingDevice(new FMyStereoRenderingDevice());
		GEngine->StereoRenderingDevice = FMyStereoRenderingDevice;
	}
};

Hi everyone,

I read this thread with a lot of interest, for I have a quite similar objective, even though I do not need to reach “museum quality”. For now, I simply want to use the Unreal Engine on a 3DTV with active shutter glasses. I just started using UE recently, so I recognize myself in such a description:

toddler drooling upon itself asking
silly questions about the fundamentals

Anyway, reading these lines helped me a lot !

First of all, I understood that there is an example within UnrealEngine.cpp, “FFakeStereoRendering”, which implements IStereoRendering and is invoked when the project is launched from a terminal:

Path/to/editor/UE4Editor.exe Path/to/project/Project.uproject -game -emulatestereo -fullscreen

I’ve been, like you, rather shocked by such a poor 3D experience, but I understood (thanks to you) that the problem comes from the settings of some parameters like InWidth and InHeight, which need to be set to 960 and 1080 to match a 1080p screen.

With regards to this paragraph:

I picked up the idea that
indeed, the changes to the script need
to first be compiled, not simply saved
to the script. So, how are changes to
UnrealEngine.cpp compiled?

I turned off the “read-only” attribute, modified the FFakeStereoRendering, and tried to compile from both the Editor and Visual Studio, but cannot manage to see any difference. I noticed that “CrashTracker.h” and “STestSuite.h” fail to be included, so despite the lack of error messages, I guess the compilation doesn’t work.

However, the idea seems to be to copy this code into a new class outside UnrealEngine.cpp and to use it as the rendering device. It has been like a month since the last reply. Did you manage to get it working?

You put
IMPLEMENT_PRIMARY_GAME_MODULE(FDefaultGameModuleImpl, SideBySide, "SideBySide");
and gave
class FSideBySide : public SideBySide
Could you, please, give the SideBySide as well? Looking at this code, I expected to see class FSideBySide : public FDefaultGameModuleImpl

Comparing the following to your StartupModule, I see several differences.

virtual void StartupModule() override
{
	InitializeShooterGameDelegates();
	FAssetRegistryModule& AssetRegistryModule = FModuleManager::LoadModuleChecked<FAssetRegistryModule>(TEXT("AssetRegistry"));

	//Hot reload hack
	FSlateStyleRegistry::UnRegisterSlateStyle(FShooterStyle::GetStyleSetName());
	FShooterStyle::Initialize();
	UShooterGameKing::Initialize();
}

Could the missing “override” keyword be part of the problem? Aren’t the Initialize() calls supposed to be important? I understand that the aim is to set
GEngine->StereoRenderingDevice = FMyStereoRenderingDevice;

Also, on this page, it looks pretty annoying to get a working PRIMARY_GAME_MODULE. Have you found a solution?

Hello VRad,

Toddler no longer drooling, but diaper mishaps still frustrating.

Regarding your experience of the default stereo template being just awful 3D: glad you found the info useful, and here are the final settings we found to produce not only the most acceptable 3D, but a close fov compared to a normal 2D build:

virtual void CalculateStereoViewOffset(const enum EStereoscopicPass StereoPassType, const FRotator& ViewRotation, const float WorldToMeters, FVector& ViewLocation) override
{
	if (StereoPassType != eSSP_FULL)
	{
		const float EyeOffset = -1.60000005f;
		const float PassOffset = (StereoPassType == eSSP_LEFT_EYE) ? EyeOffset : -EyeOffset;
		ViewLocation += ViewRotation.Quaternion().RotateVector(FVector(0, PassOffset, 0));
	}
}

virtual FMatrix GetStereoProjectionMatrix(const enum EStereoscopicPass StereoPassType, const float FOV) const override
{
	const float ProjectionCenterOffset = 0.0f;
	const float PassProjectionOffset = (StereoPassType == eSSP_LEFT_EYE) ? ProjectionCenterOffset : -ProjectionCenterOffset;

	const float HalfFov = 2.19686294f / 4.f;
	const float InWidth = 960.f;
	const float InHeight = 1080.f;
	const float XS = 0.5f / tan(HalfFov);
	const float YS = InWidth / tan(HalfFov) / InHeight;

Note the negative value for EyeOffset; this compensates for the default being channel-swapped. How this ever made it past the developers, my eyes hurt just thinking about it.

Now, onto your main question about compiling. As noted above by Nick (who I’m thinking is still way consumed with work related to the release of UE4 Bullet Time, and hasn’t responded here to the question of how to push through the second and preferred approach of compiling the FFakeStereoRendering class into a nice game module), you see in his early comments above these two choices: 1) edit the class within UnrealEngine.cpp in Visual Studio or Xcode, compile, then open into the UE4 Editor every time you wish to work with a project making use of the class, or 2) build your own module into an engine that then lives in the Editor right next to the other engine versions living there.

So, still wishing to resolve that second approach, and maybe we can both be of help to one another here by comparing notes. As to how we finally got everything moving using the first approach: we were only able to do this on the Windows side via Visual Studio, because the most recent version of UE4 won’t compile in Xcode under Mavericks, and I’m not ready to take the plunge and update a slew of expensive software that’s moved into subscription models by upgrading to Yosemite or El Capitan.

So, it appears you’re already clear on how to edit UnrealEngine.cpp in Visual Studio, then compile and save that engine as the thing you’ll open every time in Visual Studio to launch the Editor. If not, say so; if yes, then the only thing left to do, to my understanding (will check on this), is to 1) cook a build of your project through this routine, and 2) add “-emulatestereo” to the Shortcut path in Properties. There might be one step I’m leaving out, which is what directs UE4 to look for that particular command; will check.

I hope that helps. Once we kicked this out via HDMI to an HD 3D projector, the results looked great; the performance hit from that second virtual camera seemed negligible, though I don’t have numbers yet to say just how much.

Best,
Benjy

So, still wishing to resolve that second approach, and maybe we can both be of help to one another here by comparing notes.

I would gladly team up on this. Sadly, I have upcoming deadlines for other, non-UE4-related projects and will probably not be able to work full time on it.

Out of curiosity, I would like to understand the logic behind the changes.

I know that the Inter-Pupillary Distance is given through EyeOffset. The mean IPD is 65 mm (standard deviation: 3.7) for males and 62 mm (3.6) for females. I assume that the choice of 64 mm in FFakeStereoRendering is historically bound to the Oculus DK1. I, therefore, understand the initial EyeOffset = 3.2f. You obviously divided this value by two, but I assume that it is not related to the IPD, since 32 mm wouldn’t be valid. This factor of two comes back both in:

const float HalfFov = 2.19686294f / 4.f;

and

const float XS = 0.5f / tan(HalfFov);

Could you please give some explanation about it?

Concerning the first solution, I think that my mistake was to try to compile the installed version. On this page

Most developers will have everything they need using the installed version of the engine, but if you want to modify the engine directly, or contribute your changes to the community, this is the right place for you.

I thought that removing the “read-only” permission on UnrealEngine.cpp would allow me to modify it, but it seems that the compiler does not take my modifications into account. I will download the source code from GitHub as soon as I can.

Concerning the second solution, your

TSharedPtr<FMyStereoRenderingDevice, ESPMode::ThreadSafe> FMyStereoRenderingDevice(new FMyStereoRenderingDevice());
             GEngine->StereoRenderingDevice = FMyStereoRenderingDevice;

Sounds pretty good compared to this thread. Setting GEngine at runtime is definitely the direction I will aim for, since a feature allowing one to choose either the Oculus Rift or a 3DTV as output would match my needs perfectly. It looks like he modified the game engine to solve his problem, but creating a custom derivative of FFakeStereoRendering would be a better approach:

I would still be interested to hear if there is a more natural way to accomplish this behavior without altering the engine header and implementation.

I’ll try to contact him as well; I guess he could be interested in joining us in this thread. Let’s keep in touch,

Sincerely,

Vincent

Hello Vincent,

Regarding the parameters feeding the stereo template: you asked me to explain the logic of doubling some values and halving others. I’ll do my best, but I wish I could talk with my hands or point at imagery, as we’re talking mental constructs of visual paradigms with various moving parts. I’ve confused myself just trying to follow what’s happening, much less explaining it, but they say teaching is the best way to learn, so here goes.

The EyeOffset controls the inter-pupillary distance (in film we prefer “inter-axial”, or ia, since “pupillary” implies human), but like early 3D cinema rigs (beamsplitters), the ia isn’t locked to convergence. So picture two virtual cameras and their respective projected frustums. Whether the cameras are shooting in parallel or converged (toed in), the principle is the same; am attaching a pic which uses the converged approach. Maybe you already know this stuff, but I’m not assuming that, and maybe it’s helpful to others down the road. Where all those lines cross is the convergence point, which determines the screen plane. But what you want to imagine is what happens when you change ia without locking convergence. You pull the cameras apart without at the same time toeing them in, and you see how a greater ia moves the convergence point farther away, with the unintended effect of forward projecting any subject matter in the scene that’s in front of the new convergence point. Note, we halved ia with “float EyeOffset = -1.60000005f”, which is a negative value, used to correct for the default being channel-swapped.

At the same time, you saw we returned the ProjectionCenterOffset to 0.0, which is because we’re not going out to an HMD with a super wide field of view. Keep in mind the fov of the display is a different animal than the fov of the virtual camera in the content.

Regarding issue 2: it seems you’re in motion figuring out how to write a module that lives inside UE4, rather than compiling a special engine via Visual Studio or Xcode. Let me know if you manage to make that work.

Best,
Benjy

Hello Nick, it’s been a while; am finally circling back to dialing in the stereo template best for this project. I’m good with editing the FFakeStereoRenderingDevice class in VS 2013, able to package an exe, and the 3D looks quite acceptable with our settings. FYI, the defaults for float EyeOffset are channel-swapped, left-eye data goes to the right eye and vice versa, a big 3D no-no, but moving on. In our project the user picks up a flashlight at the entrance to a cave, which then lives at the bottom of the screen, locked to the camera, and shines dynamically wherever the user navigates and looks. I wish to adjust the convergence point (set the screen plane), have read through your responses here, but remain unclear on which variable controls this. I understand if you change one parameter, you’ll likely need to change another, as the values aren’t locked; they all work in a system. Thanks for advising.

Hello Nick, it’s been a while, am just now returning to efforts to dial in the stereo, hoping you have a sec to respond. After massaging all the combinations I’m now fairly certain of the one group of settings appropriate for 1080p 3D SxS output, with the one possible exception being preference for fov. I’m attaching the FFakeStereoRenderingDevice class here, and have some conclusions, please tell me if I’m mistaken.

You said, “We use infinite far plane projection…”; I wasn’t sure what that meant, but I’m led to believe this means the screen plane (convergence point) is always the farthest object in the scene. Yes?

With the present settings I get a camera perspective that’s not distorted horizontally relative to vertically, the fairly wide fov (more conducive to 3D) being the only type of distortion introduced. I played around with every variable in the class, including EyeToSrcUVScaleValue, and could not effect a change in convergence. I’m thinking there’s no way to set convergence, no?

That would be unfortunate. In one of my levels, the SpectatorPawn BP has a flashlight in hand that dynamically lights the cave as you go exploring. The problem is, with a robust EyeOffset of 1.6 and everything forward projected, this flashlight isn’t where it belongs. I’m not against some forward projection, or even a lot, but the only people in the 3D world I’ve ever known to forward project everything all the time are NASA in their 3D IMAX films, where in the dark theater and on the huge screen they can get away with it (for the most part). The standard Hollywood 3D stereo template typically allows for 1% negative parallax and 2% positive, which is very tame and even lame, but putting all 3% (or a higher percentage, based on EyeOffset) out in front of the screen invites numerous edge violations and just doesn’t make sense in the way stereoscopic subject matter relates to narrative.

Any thoughts? Thanks.

Did you ever find a solution for creating a custom module?
Would you be willing to share this?

Thanks,

We didn’t. Always a challenge staying on the radar.

hello,

i’m new to Unreal Engine coding. can someone help me with how to use this code to modify the engine source and recompile it to enable SBS 3D in Unreal? my engine version is 4.14.3.

thanks a lot.

You need to download Visual Studio 2015, and also the source code to UE4, which is done one of two ways, with slightly different outcomes. From Epic Launcher > Library, if you click on Grab the source on GitHub, you can either download the files, after which you can make changes to the engine in VS 2015, or, the preferred route, use GitHub to make a branch of the engine on your machine, the advantage here being that whatever modifications you make can a) be shared with the community, and b) be easily kept up to date against UE4. I recently had some help with the latter, and when I get back to town I’ll see if I can’t just make the code that worked for me available to the community; will post an update here.

As for how you use GitHub or go the other route, you’ll need to go through the instructions in that Launcher link or have others help you over the forum. It’s been too long since I’ve dealt with the simpler download route, and more recently I had a friend walk me through setting up GitHub; not so complicated, but I’m not the one to help with that.

As for the code, why don’t you first get UnrealEngine.cpp going in VS 2015, and by the time that’s done, I’ll be back with help on the code.

yup … i know how to use VS, compiling, and GitHub :]

but … is just changing the code in UnrealEngine.cpp all that’s needed to achieve true SBS 3D?

Yes, it follows standard SBS for 1080p monitors.

When you’re in VS editing the engine, search for FFakeStereoRenderingDevice; you’ll get to the pertinent code. Set float EyeOffset = -1.6f; I believe this parameter sets the interaxial (the distance between cameras); it’s important you set this as a negative value. I discovered that whoever wrote this code originally, using positive values, either didn’t know what they were doing or was doing something for a very different display architecture, as the positive values produced channel-swapped 3D (pseudo 3D), a sadly common mistake by someone not understanding what they’re looking at.

Set const float ProjectionCenterOffset = 0.0f; This parameter, I believe, was intended for HMDs and the projection used to map pixels to that architecture; not appropriate here, so we null it.

Set const float HalfFov = 1.5f / 4.0f; This one you can play with to your liking; it’s your fov, expressed as a ratio it seems, and probably relates to the squeezing of the horizontal aspect in SBS.

Set const float InWidth = 960.f; If I remember correctly, the default assumes a 640x480 raster from standard-def days. SBS chops the horizontal in half, so half of 1920 = 960.

Set const float InHeight = 1080.f;

Set const float XS = 0.5f / tan(HalfFov); I believe this relates to the convergence point, but careful here: these parameters function in a system; change one and another can be thrown out. In the early days of live-action 3D cinema, which I’m trained in, 3D beamsplitter rigs functioned this way; if you changed ia and didn’t account for how that moved the convergence point, you invited unintended results. We cheered when Element Technica locked convergence such that when you programmed the cameras to pull apart, they automatically toed in to keep convergence fixed on the same point. On this point, I don’t love where these settings place convergence for all situations, and maybe you’ll learn something that you can in turn share with me. I had locked a flashlight (a prop with a light built in) to the VR_Character; you light up this scan-based cave model as you navigate, which with these settings forward projects the prop way too aggressively, something like 25-30% forward of the screen plane. I’d prefer 10-15%. It not only calls too much attention to itself; in this dark world you invite ghosting, and I can sometimes feel my eyes straining to accommodate the 3D, just on the prop, not the background cave.

That last one, const float YS = InWidth / tan(HalfFov) / InHeight; I believe this default code creates the squeezed SBS. Good luck, keep me informed.

thanks for your reply :]

you mean change the code to something like this?

or like this?

and another question:
what about 3D projectors in 3D cinemas? does changing these parameters to these values get true SBS?

SBS is a standard format for various display technologies; the output doesn’t know or care what’s at the end of an HDMI cable or such, a 3D monitor or a projector, so this works on any display system supporting SBS.

Of the two screenshots, capture2.jpg is more like it, but you went off the map in several instances with your own values, which won’t work; use my values.