I want to share some of what we are planning to do with the Engine at the University of Groningen here in The Netherlands. We have a visualisation department (Virtual Reality and Visualisation | Research and Innovation Support | University of Groningen). We have a curved wall display powered by 6 HD projectors, and a CAVE setup driven by 4 projectors. Each projector is driven by a separate PC with a Quadro K6000 card in it.
Our facilities are used for all kinds of things: from molecule visualisation to psychological experiments and architectural visualisation.
For years now we have been using our own software based on OpenSceneGraph. We are now looking at UE4 as a replacement. One thing this will make possible is for people outside our group to make visualisations for our theater. In theory it should be possible to convert an ArchViz project made in UE4 to display correctly in our theater by just adding a few C++ files to the project. Right now there are some engine changes needed, but I have put up a pull request for them.
Right now I have the engine running in quad-buffer stereo at 30 fps. To keep all the PCs in sync I use Unreal’s networking capabilities. There is one main PC that acts as the server, and all the PCs that drive the projectors are clients that connect to it. All the PCs use the same actor view as the main camera. I had to modify the network code to run in frame-locked step with the master PC, to ensure all the clients were drawing a frame with the same data.
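For anyone curious what "frame lock-step" means in practice: this is not the actual engine modification, just a minimal standalone sketch of the synchronisation pattern, where the master publishes a frame number and each client blocks until the master has released that frame:

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <thread>
#include <vector>

// Minimal frame lock-step barrier. The master publishes a frame number;
// a client only renders frame n once the master has started frame n.
// In the real setup this handshake goes over the network, not a mutex.
class FrameLock {
public:
    // Master side: release all clients for frame n.
    void BeginFrame(int n) {
        std::lock_guard<std::mutex> lock(m);
        frame = n;
        cv.notify_all();
    }
    // Client side: block until the master has started frame n.
    void WaitForFrame(int n) {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [&] { return frame >= n; });
    }
private:
    std::mutex m;
    std::condition_variable cv;
    int frame = -1;  // no frame released yet
};
```

The point of the pattern is that a client can never get ahead of the master, so every client draws each frame using the same replicated state the master sent for that frame.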
From there I made a plugin based on IHeadMountedDisplayModule (I used this in favour of IStereoRendering so it gets automatically loaded by the engine) that calculates the right frustum for each of the 6 displays.
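For reference, a per-display frustum like this can be computed with the standard generalized (off-axis) perspective approach, from the physical screen corners and the eye position. This is a plain C++ sketch of that math, not our plugin code, and the corner coordinates in any real setup come from measuring the screens:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static Vec3 norm(Vec3 a) {
    double l = std::sqrt(dot(a, a));
    return {a.x / l, a.y / l, a.z / l};
}

struct Frustum { double left, right, bottom, top; };

// Off-axis frustum extents at the near plane for a physical screen defined
// by its lower-left (pa), lower-right (pb) and upper-left (pc) corners,
// seen from eye position pe. These extents feed a standard glFrustum-style
// asymmetric projection matrix.
Frustum OffAxisFrustum(Vec3 pa, Vec3 pb, Vec3 pc, Vec3 pe, double nearZ) {
    Vec3 vr = norm(sub(pb, pa));    // screen-space right axis
    Vec3 vu = norm(sub(pc, pa));    // screen-space up axis
    Vec3 vn = norm(cross(vr, vu));  // screen normal, towards the eye
    Vec3 va = sub(pa, pe), vb = sub(pb, pe), vc = sub(pc, pe);
    double d = -dot(va, vn);        // perpendicular eye-to-screen distance
    double s = nearZ / d;           // scale extents back to the near plane
    return { dot(vr, va) * s, dot(vr, vb) * s, dot(vu, va) * s, dot(vu, vc) * s };
}
```

With head tracking, pe changes every frame (and differs per eye in stereo), while the screen corners stay fixed, which is exactly why each display needs its own recomputed frustum.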
We are far from done; some things on the to-do list are:
- Getting Quadro Sync to work. This will sync the vsync signal of all the displays so you do not get any tearing between them. This is almost complete.
- Getting the engine to run in our CAVE with head tracking.
- Our theater wall is also a touch screen that uses the TUIO protocol; it would be cool to get that input into the engine.
Here are some photos of Epic’s Sun Temple project running in our theater. The image looks a little blurry because it’s running in stereo mode and you see the image for both eyes at once.
This is unbelievable! Great work! :D
Fantastic work Pjotr!
This is pretty exciting for me because I’m working as the content developer for a similar facility that’s being built at the University of the Sunshine Coast in Australia (it’s a Mechdyne Cave 2). Currently we’re having to use Unity 3D because it’s the only engine we’ll be able to run in the cave (via Mechdyne’s GetReal3D plugin).
But I’d VERY much prefer to be using Unreal Engine
Just an update: we got Quadro Sync to work, and the engine also works in our CAVE with head tracking. We are in the middle of upgrading our CAVE, but after that I will try to post a video of it.
More good news: our patch to enable support for quad-buffer stereo in OpenGL was integrated into the engine and will be available in 4.8 (see info about it here: https://github.com/EpicGames/UnrealEngine/pull/709). Mind you, it is not a complete solution: it just enables rendering to the left and right buffers from your code; you will still need to write a plugin that makes use of it.
We also spotted a bug with temporal AA and off-axis matrices. The fix is not yet in the engine, but if anyone is having this issue you can try to integrate it from here: https://github.com/EpicGames/UnrealEngine/pull/912
I was also able to sync the simulation time across all the slaves; this required some engine modifications. Given how the engine is set up, the modifications are more of a hack that I don’t feel comfortable making a pull request for just now. When I have time I will see if I can come up with a more elegant solution so I can make a pull request for it as well.
We also have a little VRPN plugin that maps tracker and button devices to UE4 key mappings. If anyone is interested in it, we can see if we can get the code cleaned up and posted on GitHub.
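To give a rough idea of what such a mapping involves (this is not the actual plugin, and all names here are made up for illustration): the core is just a table from VRPN button identifiers to engine key names, consulted from the VRPN button callback:

```cpp
#include <cassert>
#include <map>
#include <string>
#include <utility>
#include <vector>

// Hypothetical sketch: route VRPN button callbacks to engine key events.
// Keyed by (VRPN device name, button index) -> engine key name.
using ButtonMap = std::map<std::pair<std::string, int>, std::string>;

struct KeyEvent { std::string key; bool pressed; };

// What a vrpn_Button callback handler boils down to: look the button up
// in the map and, if mapped, emit a key press/release event for the engine.
void HandleVrpnButton(const ButtonMap& map, const std::string& device,
                      int button, bool pressed, std::vector<KeyEvent>& out) {
    auto it = map.find({device, button});
    if (it != map.end())
        out.push_back({it->second, pressed});
}
```

Tracker (pose) data works the same way in spirit, except the callback delivers a position and orientation that get mapped onto a tracked component instead of a key.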
I’m interested in the VRPN plugin. Can you post it on GitHub?
Very cool: it looks beautiful!
Do you need Quadro Sync if you have projectors with sync ports, like the Barco F50, where if you daisy-chain them together they’ll all be in phase?
All our projectors have sync ports, but if we disable Quadro Sync we still occasionally see tearing between the projectors.
Pjotr, do you have a repo for your extra code? We’re looking into Quadro Sync too, and I found your email while researching custom camera matrices for CAVE-type applications.
Thanks for sharing.
Could you explain how you implement the Present function of FRHICustomPresent? (I think that is where you should call glDrawBuffer(GL_BACK_RIGHT) and glDrawBuffer(GL_BACK_LEFT).)
For example, how do you get the current render context data in your Present function?
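Not the actual implementation, but a standalone sketch of the structure a quad-buffer Present typically has: select the left back buffer, output the left eye, then do the same for the right, and swap. The GL enum values below match the standard <GL/gl.h> constants but are defined locally so the sketch compiles without a live OpenGL context:

```cpp
#include <cassert>
#include <functional>
#include <vector>

// Standalone sketch of the per-frame draw loop a quad-buffer Present runs.
// Values match the standard OpenGL constants; defined here only so this
// sketch is self-contained without GL headers or a context.
enum GLBuffer { GL_BACK_LEFT = 0x0402, GL_BACK_RIGHT = 0x0403 };

// drawEye stands in for "glDrawBuffer(buf), then blit that eye's rendered
// texture into the selected buffer".
void PresentStereo(const std::function<void(GLBuffer)>& drawEye) {
    drawEye(GL_BACK_LEFT);   // real code: glDrawBuffer(GL_BACK_LEFT); draw left eye
    drawEye(GL_BACK_RIGHT);  // real code: glDrawBuffer(GL_BACK_RIGHT); draw right eye
    // after both eyes: swap buffers on the window's device context
}
```

In the real plugin the interesting part is exactly the question asked above: Present runs on the RHI thread, so the eye textures and GL context have to be reachable from there, which is engine-version-specific plumbing rather than GL calls.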
I’m currently working on a master’s project with a similar goal. We have a large multi-monitor setup (64x1080p screens) and I’m developing solutions for game engines to render across all of them (not in stereo, however). I’m particularly interested in what changes you had to make to the networking code to achieve the frame lock-step networking you described. Do you have any good resources or advice for someone looking to do the same?
Thanks in advance,
We have done an implementation of active stereo + VRPN for CAVEs and multi-wall screens.
I’m wondering if the community would be willing to contribute if I shared our codebase on GitHub.
8 nodes, K6000 cards.
Sure I would! And I think quite a lot of other people would too!
Do you do it on a single machine or spread the clients across several machines?
We used 8 machines for the image above.
I am doing something similar for a CAVE. Can you maybe explain how to use this quad-buffer stuff? I am trying to do it on a single machine; all of the projectors are connected to it (2 screens so far, which means 4 projectors overall). The machine has 2 Quadro cards inside, so I do not care about performance yet. It’s for testing purposes only and still at an early stage of development.
I am interested in this too. Could you share code examples and/or tutorials? I am trying to do it for 2 screens, each using 2 projectors (thus 4 projectors overall), on a single machine. How did you create more than one frame and attach a monitor and camera to each? Also, I have not found anything about how to use the quad buffer. Could you please help me?
Thanks in advance
I am currently doing the same.
I need tutorials as well; I see a lot of people interested in this topic asking for suggestions.
I hope someone can provide some materials.