If you use Omni Mode for Stereo, you need to change the texture size manually.
Open the folder Camera_360\Textures\RenderTargetStereo, open the render target, and change 4096 px to 2048 or 2500.
In the next update, I will make this process automatic.
If this doesn’t help, please email me. Let’s see what else we can do.
Send me an email at Lenina62email@example.com.
If I understand correctly, you have several cameras in the scene and would like them to take turns rendering. Right?
This lesson shows how you can attach the capture camera to all cameras and start rendering via the Sequencer: https://youtu.be/BO6LiAyHyaU
I hope this helps you.
I can’t figure out what’s going on from the picture.
Can you show your camera settings? It looks like the post process settings are being reset.
If you use automatic post-process configuration, then all post-process settings may be reset.
-Planar Reflection is not supported.
-Sometimes seams may be visible. ScreenSpaceReflection is not recommended, since all cameras in Unreal Engine use screen space.
-Raytracing reflection is recommended.
I found a problem in Unreal Engine 4.25 which causes artifacts to appear in the scene.
For smoothing, I use the command
r.PostProcessAAQuality 0 - without smoothing,
r.PostProcessAAQuality 2 - with smoothing.
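If you prefer not to type the command on every run, console variables can also be set at startup in the project config. This is standard Unreal Engine behavior, not plugin-specific; the value 0 below is simply the no-smoothing setting from above:

```ini
; Config/DefaultEngine.ini
; CVars in this section are applied when the engine starts.
[SystemSettings]
r.PostProcessAAQuality=0
```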
So that’s the problem with the engine.
I will file a bug report with Epic Games.
If you are bothered by artifacts and don’t need smoothing:
Open the Camera_Rec_360 actor, open the Construction Script, and delete line 1.
We just purchased your plugin, and would like to capture a movie, but with no luck so far.
We tried with an Empty Level, with a ground plane and background sky. We added a Camera Actor, animated it in the Sequencer, and followed the steps in the tutorial below with the Camera_Point_360 and Camera_Rec_360 objects. https://www.youtube.com/watch?v=HXAD…QwBbyOb_YGDcy1
When trying Sequencer “Render Movie” option, only the original camera’s view gets exported, with no panoramic projection.
When trying the “Screen Shot System” “Screen Shot Rec” option in the Camera_Rec_360 object, panoramic images get exported, but they do not match the animated camera’s route. The camera position does not change; the movement of clouds is visible on the animated background sky, but the camera does not move, so the same frame gets exported 150 times.
Sometimes we can see other frames appearing in the exported images, but they are in no proper order. For example, the first and last frames get exported, and then the last frame gets exported 150 times.
We tried it on 2 different configs:
Ryzen Threadripper 1950X, 32 GB RAM, 2x Geforce GTX1080 Ti
Intel i7-5930K, 128GB RAM, 2x Geforce GTX 1070
What could be the problem? Is the hardware not sufficient for capturing?
Is there maybe a sample scene available, so we can check whether our settings are messed up?
@ElizzaRF Thanks a lot for the support, and quick reply!
Before I posted my first comment, I used Get Target All Transform to select the camera, but it didn’t seem to recognize or update the connection between the two objects. (By the way, I’m using version 4.25.1.)
Anyway, I managed to get a capture working in the meantime by adding the “Camera_Point_360” object itself into the Sequencer, as in the video link you provided, and exported a nice panoramic sequence. This method seems to work fine!
The tutorial you linked (attaching “Camera_Point_360” to a CineCameraActor in the Sequencer and deactivating the track) also works. Regarding this method: is it possible to animate camera exposure? I can’t add a Camera Component to “Camera_Point_360”, and it does not seem to adopt the animated exposure from the CineCameraActor.
Adding PostProcessVolumes to the level does not seem to affect the rendered output either.
Also, a question: is it possible to narrow the field of view for the Dome Camera or VR180_Stereo outputs?
We are making a VR animation for Oculus Go, which seems to have an upper limit on video resolution of 3840x3840 or 5760x2880. That does not seem to be enough for VR180, so I’m thinking of projecting the rendered video onto a smaller segment of a sphere (for example 135° by 135°) to make better use of Oculus Go’s maximum video resolution.
Of course, I can capture a full 180° at a higher resolution and crop the image sequence afterwards in video editing software to get a narrower field of view; I’m just wondering if there is a more efficient workflow for this.
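For what it’s worth, the trade-off can be checked with simple arithmetic. This sketch uses only the resolutions quoted above and ignores the per-eye layout of the stereo formats:

```python
# Horizontal angular resolution (pixels per degree) for a given
# image width and horizontal field of view.
def pixels_per_degree(width_px, fov_deg):
    return width_px / fov_deg

full_360 = pixels_per_degree(5760, 360)  # full equirect at 5760x2880: 16.0 px/deg
vr180 = pixels_per_degree(3840, 180)     # 180 deg spread over 3840 px
seg135 = pixels_per_degree(3840, 135)    # the same 3840 px over a 135 deg segment

# Narrowing 180 -> 135 deg raises angular resolution by a factor of 180/135.
print(full_360, vr180, seg135)
```

So at the same pixel budget, the 135° segment carries about a third more detail per degree than a full VR180 frame, which matches the intuition behind cropping.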
Thanks for the great plugin and for the support in advance!
-5. Don’t forget: enable Infinite Extent (Unbound) to apply the post process to the entire level, and everything will work. Alternatively, place the Camera_Rec_360 actor inside the post process zone.
It is better to do this through the post process, so that all cameras used in Camera 360 have the same exposure. It’s also better to use the Manual method.
I need to think about it. Thank you for the idea, but unfortunately there is currently no way to reduce the field of view.
I am constantly working on improvements and am always happy to hear new ideas.
Thank you for your kind words and for purchasing the 360 Camera. I’m always ready to help.
First, thanks for this plugin! I’m really excited about it.
I used the ArchViz project from the Learn tab, but there is quite a bit of noise, especially on the ceiling and walls, and not so much elsewhere. I don’t have a top-of-the-line computer, but it is good: an i9 processor and an RTX 2080 Max-Q (MSI laptop).
My best guess is that I can’t set the texture quality above 2048 without it crashing. I’ve tried changing AA, and I’ve tried the high-quality settings as well, but those don’t seem to make a difference.
Good afternoon, michalex19!
Thank you for your purchase. Email me Lenina62firstname.lastname@example.org if you can’t apply these settings.
You have a great computer, but since raytracing is very demanding, you need to set the settings very carefully.
**You have several ways to create a rendering.**
Hi, I just wanted to be sure whether Camera 360 is guaranteed to be “frame locked”, meaning that the gameplay fps is locked to the rendering fps (24 fps, for instance) and there can be no frame desync caused by gameplay lag, so that two different renders have the exact same frames and timing. Can you guarantee that? Or do you know it is not “frame locked”?
Movie Render Pipeline https://youtu.be/a_vp7b5Blyg
Here, all frames correspond to the frame rate specified in these systems: if set to 24, it renders 24 frames per second; if 30, then 30 frames per second, and so on. Freeze frames are not used, and there are no missing frames.
HighResShot system https://youtu.be/J7uEiOhkH8o and Custom Rendering https://youtu.be/eUlgjTN-4Qg
The timing may differ by a few frames, since the game is suspended, but not by much. If the window is not active, frames may be skipped; this is why the window must always remain active and you cannot use other applications.
None of these systems has desynchronization. Even if your scene is very heavy and runs at only 5 frames per second, the render will still have the specified frame rate, such as 30 fps.
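The principle behind this can be sketched in a few lines. This is an illustration of fixed-timestep capture in general, not the plugin’s actual code: the capture loop advances the scene clock by exactly 1/fps per exported frame, so wall-clock render time never affects output timing.

```python
def capture(duration_s, fps, render_frame):
    """Offline capture: advance the scene clock by a fixed step per frame,
    regardless of how long render_frame takes in wall-clock time."""
    dt = 1.0 / fps
    frames = []
    t = 0.0
    for _ in range(round(duration_s * fps)):
        frames.append(render_frame(t))  # may take seconds; timing is unaffected
        t += dt                         # scene clock advances exactly 1/fps
    return frames

# Even a renderer managing only a few frames per second in real time
# yields exactly duration * fps frames, each stamped at a multiple of 1/fps.
clip = capture(duration_s=2.0, fps=30, render_frame=lambda t: t)
print(len(clip))  # 60
```

This is why a scene that plays back at 5 fps in the editor can still produce a perfectly timed 30 fps output.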