And by the way: why does the plugin have two functions for the yaw if they are useless? SP.ShouldOverwriteInitialYaw and SP.ForcedInitialYaw are two parameters in the plugin. I think Epic changed something in the code and now these two do nothing anymore, because the plugin no longer grabs the yaw of the camera in the Sequencer. ;-(
That's because people realised it's a terrible idea to try to rotate the camera. A 360 camera has no forward direction; it's viewing every direction. You don't get distortion if you do this in post: After Effects and Premiere can both handle it, as can the Skybox plugins. The point is, though, that if you are rotating the camera you are effectively rotating the player's head. If you turn it 90 degrees to the left, it's like grabbing their head and turning it. If they turn their head in the opposite direction at the same time, it's as if you reversed the head tracking.
The way to tell a story is through visual and audio elements within the scene. You never want to force somebody to do something in VR.
Hm, I agree with you on many points, but not all. You are right that you should not yank the head quickly in one direction, because the false movement can make people sick. The only thing I want the yaw control for is where the camera looks first in a scene. I have a forest scene with a character running along a river bed. Right now the output image looks somewhere into the scene, say directly into a tree or a bush, and not at the face of the running character. Of course the viewer could change direction, but believe me, before a person new to VR changes direction you have to tell them to: look down, look up, right... only then do they get used to it. So it is useful to point the camera at the spot where the film starts. I agree that if you just want to show a static scene it is best not to move the camera and to let the viewer look around with the headset, but at the start, or in specific cutscenes, you want to steer the viewer to where something is happening, or a lot of what is going on in the scene won't be seen by the viewer, right?
So using it very subtly is the point here. You should not swing the camera around like crazy, but you should be able to aim at specific scene elements if you want to, just like in 2D. That's how I understand VR and 360° video. I have been doing this for two years now, was there at the beginning, and learned a lot with all the goggles and software I used. I will have to search harder for a solution for the yaw of the camera, but I will get it; I always find a solution.
By the way, did you manage to change the viewpoint of the camera in Adobe Premiere? I didn't find anything like that; maybe you have a hint. At the moment I can only make a film out of the pictures from Unreal Engine: I have to resize the top and bottom pictures by 50% and arrange them into a 3840x1920 frame for the Gear VR. Then the stereo 3D 360° video works on the Gear VR.
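That resize-and-stack step can be scripted instead of done by hand. A minimal sketch with NumPy, using nearest-neighbour downsampling; the 3840x1920 Gear VR target and the 50% vertical resize come from the post above, everything else (array shapes, function name) is an assumption:

```python
import numpy as np

def stack_top_bottom(left_eye: np.ndarray, right_eye: np.ndarray) -> np.ndarray:
    """Halve each eye's height (nearest-neighbour: keep every 2nd row)
    and stack them into one top-bottom stereo frame."""
    top = left_eye[::2]       # left eye -> upper half
    bottom = right_eye[::2]   # right eye -> lower half
    return np.vstack([top, bottom])

# Two 3840x1920 equirectangular eye renders (dummy data here)
left = np.zeros((1920, 3840, 3), dtype=np.uint8)
right = np.zeros((1920, 3840, 3), dtype=np.uint8)
frame = stack_top_bottom(left, right)
print(frame.shape)  # (1920, 3840, 3) -> one 3840x1920 top-bottom frame
```

For production you would use a proper filtered resize rather than row-skipping, but the layout logic is the same.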
Here in the editor you can see the camera is pointing at the character, so everything should be fine:
But in the output the character is not in the center; a tree is. It looks like the output camera is turned by 90 degrees. Sure, the viewer could now turn their head 90 degrees, but I don't think that's right and I don't want it:
Yes, when a scene transition happens you want to adjust the forward direction to be where they should be looking. I doubt you ever want to point them downwards though; that would be very sickening.
You do these fixes in post; you don't need to control it in Unreal. https://helpx.adobe.com/premiere-pro/using/VRSupport.html#ConfiguringthemonitorstodisplayVRVideo
Have a look at the "360-degree panning" section there; that is what you are looking for.
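The same "360-degree panning" fix can also be scripted outside Premiere: on an equirectangular frame, a yaw change is just a horizontal pixel shift that wraps around, so no distortion is introduced. A sketch with NumPy (my own illustration, not part of any plugin; the sign convention is an assumption):

```python
import numpy as np

def rotate_yaw(equirect: np.ndarray, degrees: float) -> np.ndarray:
    """Rotate an equirectangular frame around the vertical axis by
    shifting pixel columns; the image wraps, so nothing is lost."""
    width = equirect.shape[1]
    shift = int(round(degrees / 360.0 * width))
    # shifting columns left turns the view to the right (assumed convention)
    return np.roll(equirect, -shift, axis=1)

frame = np.arange(8, dtype=np.uint8).reshape(1, 8)  # tiny 8-pixel-wide "frame"
print(rotate_yaw(frame, 90))  # shifted by 2 of 8 columns: [[2 3 4 5 6 7 0 1]]
```

This is exactly why the forward direction is better fixed in post than by rotating the capture rig in-engine.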
This is one I worked on. These were all different levels, but since we had directional movement on the camera we needed to make sure that forward when the scene changed was forward on the headset too. We just export the video, and then in Premiere you can adjust each scene and set its forward direction. We actually use some extra plugins to do this in After Effects too, and handle our colour grading settings and suchlike better there, but Premiere is better if you aren't using paid plugins.
Amazing work. How long did the render take?
We ended up using the mono panorama exporter on this. We rendered out in 8K EXR format at 60 fps and it took about 5-6 seconds per frame. Mono is a hell of a lot faster than stereo.
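To put those numbers in perspective, a quick back-of-the-envelope calculation (frame rate and per-frame time are from the post above; the one-minute clip length is my assumption):

```python
fps = 60
seconds_per_frame = 5.5   # midpoint of the quoted 5-6 s/frame
clip_seconds = 60         # assume a one-minute clip

frames = fps * clip_seconds                        # 3600 frames to render
render_hours = frames * seconds_per_frame / 3600
print(f"{frames} frames -> {render_hours:.1f} hours of rendering")  # 5.5 hours
```

So even mono 8K rendering is an overnight job for anything longer than a few minutes, which is why stereo being several times slower matters so much.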
How do you configure it to render a mono panorama?
We used that for mono rendering. A hell of a lot faster and easier to set up than the stereo plugin.
What is the limitation with the post-process?
Can anyone share how to modify SceneCapturer.cpp?
I'm not a programmer, nor do I have experience recompiling.
I have the plugin working fine, but I cannot make it export the left and right eye together, and it would also be good to have the output as JPG, since PNG takes too much space.
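Since recompiling isn't an option, both problems can be handled after export instead of inside SceneCapturer.cpp. A sketch using Pillow; the per-eye filenames and the side-by-side layout are assumptions, so adjust them to whatever the plugin actually writes:

```python
from PIL import Image

def combine_and_compress(left_path: str, right_path: str, out_path: str) -> None:
    """Paste the left/right eye renders side by side and save as JPEG,
    which is far smaller than the PNGs the plugin writes."""
    left = Image.open(left_path).convert("RGB")
    right = Image.open(right_path).convert("RGB")
    combined = Image.new("RGB", (left.width + right.width, left.height))
    combined.paste(left, (0, 0))
    combined.paste(right, (left.width, 0))
    combined.save(out_path, "JPEG", quality=90)

# Hypothetical naming; loop this over the exported frame sequence:
# combine_and_compress("frame_0000_L.png", "frame_0000_R.png", "frame_0000.jpg")
```

Swap the two `paste` offsets for a top-bottom layout if your player expects that instead.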
Thanks for sharing. Can you tell me more about the steps for the 6-camera method and the Mettle plugin? I followed your instructions, but the result looks weird when I put it into Mettle. How can I fix the Unreal camera output to the right aspect ratio? I actually set up the 6 cameras in Maya and then exported them to Unreal, but the output resolution is not square. I tried changing the filmback sensor width and height to 20.48 mm, but that doesn't work in Mettle either. Please help me…
Seeking help… 360 camera in Unreal with 6 cameras
thanks for sharing
Hello! I am currently trying to make the plugin work with my project in 4.15. We have the plugin turned on, but whenever we try to send the pictures to our designated folder using the console, nothing shows up. We followed the directions step by step. Does anyone possibly know what we are doing wrong?
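For anyone hitting the same wall: the capture is driven from the in-game console, roughly along these lines (command and variable names as I remember them from the plugin's documentation; verify them against your engine version, and check the output log and your project's Saved folder if nothing appears):

```
SP.OutputDir C:/PanoramaOutput
SP.PanoramicScreenshot
SP.PanoramicMovie 0 100
```

The first line sets where the images land, the second captures a single still, and the third captures frames 0 through 100 of the running sequence.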
Do you guys know of any software to create a “navigable panorama experience”? I mean like Google Street View where you can navigate between 360 panoramas and get a nice continuous transition between them.
We use Pano2VR to make 360 panorama tours. In the software settings you can choose to create a smooth transition.
Hi Eloo, sorry for the late reply; I didn't see your message before.
To be honest, it has been a while since I did this; Sequencer didn't exist at the time.
As I said, the goal is to get six views of your scene as if you were inside a cube. The Mettle plugin (again, it may have changed since then) expects a cube unfolded in 2D (like UVs), and you have to put each video (front, back, left, right, top, bottom) in the right place.
Inside Unreal, what I did was create a camera setup where one camera is the master and is animated (e.g. the front one), and the five other cameras are attached as children to the master.
Each camera has a square aspect ratio of 1 (not rectangular) and a field of view of 90° (this was with the simple camera, not the CineCameraActor; I don't know how to adapt the field of view to the camera sensor).
The cameras are set up so that each one looks at one face of this imaginary cube from the exact same center point.
Hope this helps.
Anyone know how to use it? The capture pawn?
What did you do?
You created a director group and added the empty group to it, and then it worked? I just realised I made the same mistake here.
I must be missing something. Could anyone take a look and see what I'm doing wrong?
What I want:
To capture a single camera track from Matinee or Sequencer (I tried both).
I have followed all the instructions in this topic: locked the frame rate to 60 in the game settings and added the arguments -usefixedtimestep -fps=60 -notexturestreaming to the standalone launch.
I configured the nodes (see image) in the Level Blueprint of the open level.
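For reference, with those arguments the full standalone launch line looks something like this (the executable, project, and map paths are placeholders for your own setup):

```
UE4Editor.exe "C:\Projects\MyProject\MyProject.uproject" MyMap -game -usefixedtimestep -fps=60 -notexturestreaming
```

The fixed timestep is what keeps game time advancing exactly one frame per captured frame instead of running ahead while each panorama is being stitched.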
What is happening:
Every time I start the standalone game and press Space to run the console panoramic commands and start rendering the frames of my camera track, it "works".
I can see the frames start being written to the folder, but, for example, if I configure it to render frames 0 to 100 of a 10-second camera track movement, about 90% of the rendered frames are identical or show only a slight change in the camera tracking. So it writes a bunch of identical frames, and I cannot see any movement when I preview them in the default Windows photo app while holding the key to step to the next photo. When it reaches the 100th frame there are no more frames to render, and the camera track runs the rest of its movement without recording any frames.
At least this is what happened in one test I did. With Matinee I can see it following the camera track, but with Sequencer I cannot see the camera track at all after pressing Space to record it.
Another thing I forgot to say: I'm rendering at 2K, because if I set it to render 8K the standalone crashes.
The engine version I'm trying to do this with is 4.14.3, and the machine I'm working on is quite capable, I think: an i7 5820K with 32 GB of RAM and a GTX Titan X Maxwell.