I’ve been working for a few weeks on adapting UE4 to a CAVE environment. I would like to share my work and get some advice.
I work for a French laboratory specialized in VR (http://www.institutimage.fr/, http://www.ensam.eu). We own many devices, including an “old” CAVE system. For years now we have successfully used our own OpenSceneGraph-based software on all of them, and we are now trying to adopt UE4 as the engine for our immersive environments.
Our CAVE setup
Our CAVE is quite specific. It has 4 sides (front, right, left, bottom), each wall measuring 3 x 2.7 meters. The stereo is passive, so we have to generate 8 images (2 per wall) at a resolution of 1400x1050 pixels for each projector.
We don’t have a cluster; we use a single computer equipped with 2 QuadroPlex 7000 units (that is, 4 Quadro 6000 GPUs). Each Quadro has 2 DVI outputs, so we have a total of 8 outputs (one per projector).
With the Nvidia Mosaic driver feature, we have created 4 extended desktops with a resolution of 2800x1050 each. Each desktop is driven by one Quadro and corresponds to the left and right eyes of one wall. We have to add black strips on the left and right side of each viewport to adapt the resolution to our CAVE walls.
Here is the workflow I used with UE4; I hope to get some feedback.
Headtracking with VRPN

I made a VRPN library able to retrieve the position and orientation of the head tracker. Within a C++ plugin, I wrote a custom actor component that wraps this VRPN library. Then I coded a custom character actor (like the first-person shooter sample) which owns this VRPN actor component.
My custom character actor contains a camera, a capsule for collision, and 2 custom properties: the position and orientation of the head tracker (provided by its component).
On the tick event, I update these properties from the component, then update the position and rotation of my custom character by calling SetActorLocationAndRotation in its blueprint (or an equivalent SetTransform method in C++).
This works well: my character’s transform updates as I move the head tracker.
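The tick-driven update above can be sketched like this, stripped of the actual UE4 and VRPN types (all names here are hypothetical placeholders, not the real plugin code):

```cpp
#include <cassert>

// Hypothetical, UE4-free sketch of the pattern: a component caches the
// latest tracker pose, and the character copies it on every tick.
struct Pose { float px, py, pz; float yaw, pitch, roll; };

struct VrpnTrackerComponent {
    Pose latest{};
    // In the real plugin this would pump the VRPN connection and store
    // the pose received from the head tracker.
    void Poll(const Pose& fromDevice) { latest = fromDevice; }
};

struct TrackedCharacter {
    VrpnTrackerComponent tracker;
    Pose actorTransform{};
    // Equivalent of the Tick event: pull the cached pose and apply it,
    // as SetActorLocationAndRotation would in UE4.
    void Tick() { actorTransform = tracker.latest; }
};
```

The important point is the decoupling: the component owns the device connection, while the character only consumes the cached pose once per frame.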
Off-axis projection matrix
For a CAVE system we have to compute an off-axis projection matrix. This thread helped me a bit (Unreal Engine on a multi projector wall display - Work in Progress - Unreal Engine Forums).
Still in my C++ plugin, I made a custom GameEngine class (A new, community-hosted Unreal Engine Wiki - Announcements - Unreal Engine Forums) and overrode the InitializeHMDDevice() method. In the UE4 source, this method creates a FakeStereoRendering device; I decided to create and use my own stereo rendering device instead, inherited from the IStereoRendering class. In this class I overrode the GetStereoProjectionMatrix(…) method to build my own off-axis projection matrix. (Don’t forget that UE4 uses a left-handed coordinate system; see the D3DXMatrixPerspectiveOffCenterLH function (D3dx9math.h) - Win32 apps | Microsoft Docs.)
By using a stereo rendering device, the viewport is automatically split in two, which is perfect for my passive stereo system.
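As a minimal, UE4-independent sketch of the math, the matrix for one wall can be built in the D3D left-handed off-center convention; the wall-frame eye offsets and wall dimensions below are placeholder parameters, not the actual plugin interface:

```cpp
#include <array>
#include <cmath>

// 4x4 row-major matrix, D3D/UE4-style left-handed convention.
using Mat4 = std::array<std::array<float, 4>, 4>;

// Off-center (asymmetric) perspective matrix, same element layout as
// D3DXMatrixPerspectiveOffCenterLH: l, r, b, t are the frustum bounds
// on the near plane zn; zf is the far plane.
Mat4 PerspectiveOffCenterLH(float l, float r, float b, float t,
                            float zn, float zf)
{
    Mat4 m{};
    m[0][0] = 2.0f * zn / (r - l);
    m[1][1] = 2.0f * zn / (t - b);
    m[2][0] = (l + r) / (l - r);
    m[2][1] = (t + b) / (b - t);
    m[2][2] = zf / (zf - zn);
    m[2][3] = 1.0f;
    m[3][2] = zn * zf / (zn - zf);
    return m;
}

// Frustum bounds for one wall, given the tracked eye position in the
// wall's frame: ex/ey are the eye offsets from the wall center, d is
// the eye-to-wall distance, w/h the wall size in meters.
void WallFrustum(float ex, float ey, float d, float w, float h,
                 float zn, float& l, float& r, float& b, float& t)
{
    const float s = zn / d;               // scale wall edges onto the near plane
    l = (-0.5f * w - ex) * s;
    r = ( 0.5f * w - ex) * s;
    b = (-0.5f * h - ey) * s;
    t = ( 0.5f * h - ey) * s;
}
```

When the eye is centered in front of the wall, l = -r and b = -t, and the matrix reduces to an ordinary symmetric perspective; the off-axis terms only appear as the head moves off center.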
Multi-process (one process per wall)
I had a look at UE4’s multiplayer features, but I don’t think they fit a multi-process application well (they were designed for multiplayer games). I decided to use MPI (Message Passing Interface - Wikipedia), a free library that lets you launch several processes and have them communicate.
So I create 4 processes, one for each wall of the CAVE. Each process renders at 2800x1050 and corresponds to one wall/floor (and one extended desktop). It contains a split screen for the left and right eyes, each eye at 1400x1050.
Process 0 reads the tracker transform and sends it to the other processes with a synchronized broadcast. Then every process computes its view and projection matrices. Depending on the process, I add a rotation according to the face it is in charge of (+90° for the right face, -90° for the left face, etc.).
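The per-wall rotation boils down to applying a fixed yaw offset to the tracked view direction. A minimal sketch with hypothetical names (the real code works on UE4 rotators/matrices, not raw 2D vectors):

```cpp
#include <cmath>

// Hypothetical sketch: each process applies a fixed yaw offset to the
// tracked view direction, depending on the wall it renders.
// wallYawDeg would be +90 for the right wall, -90 for the left wall,
// 0 for the front wall; the floor gets a pitch offset instead.
void RotateViewYaw(float wallYawDeg, float& dirX, float& dirY)
{
    const float a = wallYawDeg * 3.14159265358979f / 180.0f;
    const float x = dirX * std::cos(a) - dirY * std::sin(a);
    const float y = dirX * std::sin(a) + dirY * std::cos(a);
    dirX = x;
    dirY = y;
}
```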
This part is very specific to our CAVE, so it may not interest everyone.
Our per-eye resolution of 1400x1050 (the projector resolution) does not match the aspect ratio of our 3x2.7 meter walls; I need a width of 1160 pixels to fit. So I created a custom post-process material in UE4 which rescales the rendered image from 1400x1050 to 1160x1050, then blends the result with a texture containing two black strips on the left and right sides. These hide the (1400 - 1160) squeezed pixels, split across the two sides of the output image.
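As a rough sanity check of the fitted width (pure aspect-ratio arithmetic, ignoring projector calibration, so the numbers are approximate):

```cpp
#include <cmath>

// Back-of-the-envelope check (not the actual post-process material):
// width in pixels that matches the wall's aspect ratio at the
// projector's native image height.
int FittedWidth(double wallWidthM, double wallHeightM, int imageHeight)
{
    return (int)std::lround(imageHeight * wallWidthM / wallHeightM);
}
```

For a 3 x 2.7 m wall and a 1050-px-high image this yields about 1167 px, close to the 1160 px we actually use; with 1160 px, the remaining 240 px end up under the black strips (120 per side).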
Positioning and resolution
Finally, I had to position each process’s window and set it to a full-screen resolution. In my custom character’s BeginPlay event, I call custom methods that use FSystemResolution::RequestResolutionChange(…) and the -WinX/-WinY command-line parameters to achieve that.
It works in our CAVE, but the immersion is not perfect. I don’t feel like I am inside the 3D scene (it was far better with OpenSceneGraph). Maybe my computations are wrong, but the main problem is that the solution is very laggy and runs at a low frame rate!
With a resolution of 2800x1050 for each of my 4 processes, I get less than 10 fps, and less than 20 fps at 1400x525 per process.
The application must be very responsive (at least 30 fps) for the immersion to work.
I tried launching 4 sample UE4 applications at the same resolution and got the same results (in terms of fps), so I guess the slowness is not coming from my code.
I have identified that only one graphics card / one GPU is working, and according to these posts, it seems that UE4 does not support multi-GPU rendering…
The ideal configuration would use a cluster, so I think I am definitely stuck with my hardware…
Anyway, I just wanted to share the workflow I used to adapt UE4 to a CAVE. With a different setup (if you have a cluster), it could be useful to someone.
If anyone wants more details (code, screenshots…), feel free to ask.
If anyone has advice, feel free to share.
Thanks for reading, and sorry for my English ^^