We are experiencing consistent texture popping from streaming on camera cuts when using Sequencer. Per the Sequencer General FAQ, we have added preroll frames to both the section and the subsequence, but this has not improved the issue. We also increased the Virtual Texture pool size significantly to rule out oversubscription, but saw no change.
Our setup uses a Shot Track, with camera tracks placed inside subsequences. Does this configuration support preroll functionality as intended?
Is there recommended guidance on appropriate PreRoll Frame values (minimum/maximum) to effectively address texture streaming issues?
1: Yes, the setup you have, with camera cuts in subsequences, is supported by the preroll functionality. Without knowing precisely what you did, make sure the camera cut track/section is set to “Evaluate for Preroll,” and then set your preroll frame timing on the section in the main sequence (a rough code sketch of both settings follows after point 2).
2: The number of preroll frames you need depends heavily on what is happening in your scene transition. If you’re cutting to a scene that is effectively unloaded, you may need to add more preroll time to handle that. Depending on the cut, a minimum of around 10 frames of preroll may be enough, but the upper bound will require testing against your specific scenes.
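In case it helps to verify or script that setup, here is a minimal editor-side C++ sketch of those two settings. It assumes the 5.x property and function names (EvalOptions.bEvaluateInPreroll on the track, SetPreRollFrames on the section), so double-check them against your engine version rather than treating this as verified code:

    // Sketch only: enable preroll evaluation on a camera cut track and give a
    // section some preroll frames. Property/function names assume recent UE 5.x.
    #include "MovieSceneTrack.h"
    #include "MovieSceneSection.h"

    void EnablePrerollOnCameraCut(UMovieSceneTrack* CameraCutTrack,
                                  UMovieSceneSection* ShotSection,
                                  int32 PrerollFrames)
    {
        if (!CameraCutTrack || !ShotSection)
        {
            return;
        }

        // Equivalent of ticking "Evaluate In Preroll" on the camera cut track
        // inside the subsequence.
        CameraCutTrack->Modify();
        CameraCutTrack->EvalOptions.bEvaluateInPreroll = true;

        // Equivalent of setting "Pre-Roll Frames" on the shot/subsequence section
        // in the main sequence.
        ShotSection->Modify();
        ShotSection->SetPreRollFrames(PrerollFrames);
    }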
Can you tell us more about your scene setups and the modifications you made to your virtual texture pool sizes? Is the texture popping happening with environment objects that are already loaded, or with other objects as well?
I have set “Evaluate for PreRoll” on the CameraCutTrack in the subsequence and configured “PreRoll frames” on the corresponding section in the Main Sequence.
I also tested with and without “PreRoll frames” on the subsequence camera track, trying values up to 100 frames, both there and on the main sequence.
In all cases, PreRoll did not affect the texture popping.
Regarding the virtual texture pool:
I observed oversubscription (using r.VT.Residency.Notify 1) during some sequences and increased that pool size.
This reduced texture popping for some objects near the camera after a cut, but only for objects that the camera had previously pointed at—the improvement was seen the second time the camera cut to them.
Background objects were still showing issues.
The texture popping seems to involve both texture streaming and Lumen illumination settling after camera cuts, as shown in the attached video. Does PreRoll also help with Lumen settling, or is there a separate approach for that?
It does sound like your preroll setup is fine, so we may need to debug your virtual textures further. Using r.VT.Residency.Show 1, you can see whether your memory increase was sufficient. If oversubscription is still happening after increasing the pool, you can use r.VT.DumpPoolUsage to see which textures are hitting the pools hardest. From there, you may need to adjust the LOD Bias on those textures.
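For reference, in case you revisit the pool sizes after checking residency, this is roughly where those overrides live. Field names can differ slightly between engine versions, so treat this as a sketch of the kind of DefaultEngine.ini entry rather than a drop-in config:

    ; Sketch of virtual texture physical pool overrides in DefaultEngine.ini.
    ; Field names (DefaultSizeInMegabyte, Formats, SizeInMegabyte) may vary by version.
    [/Script/Engine.VirtualTexturePoolConfig]
    DefaultSizeInMegabyte=64
    +Pools=(Formats=(PF_DXT1),SizeInMegabyte=128)
    +Pools=(Formats=(PF_DXT5),SizeInMegabyte=128)

You can also scale all pools at once with the r.VT.PoolSizeScale cvar while testing.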
Regarding the texture popping on camera cuts improving only in the direction the camera was previously pointing: we signal the location of the next camera and use the standard streaming systems to start loading. If your streaming system is already overloaded, that request may have no effect.
The Lumen issues are another case that we often see. The first step in addressing them is to adjust the Final Gather and Lumen Scene Lighting update speeds on your post process volume or camera. Depending on whether you are using software or hardware ray tracing, you might also need to adjust the update rate of your SDFs (signed distance fields).
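If you prefer to drive those from code rather than the post process volume details panel, a minimal sketch would look like the following; the member names assume the 5.x FPostProcessSettings layout, so verify them against your engine version:

    // Sketch: make Lumen react faster after camera cuts by raising its update speeds
    // on a post process volume. Member names assume recent UE 5.x FPostProcessSettings.
    #include "Engine/PostProcessVolume.h"

    void SpeedUpLumenUpdates(APostProcessVolume* Volume)
    {
        if (!Volume)
        {
            return;
        }

        FPostProcessSettings& Settings = Volume->Settings;

        // Higher values let the Lumen Scene update faster (at extra GPU cost).
        Settings.bOverride_LumenSceneLightingUpdateSpeed = true;
        Settings.LumenSceneLightingUpdateSpeed = 4.0f;

        // Higher values let Final Gather converge faster after a cut.
        Settings.bOverride_LumenFinalGatherLightingUpdateSpeed = true;
        Settings.LumenFinalGatherLightingUpdateSpeed = 4.0f;
    }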
So to sum this up into a series of questions:
After you’ve increased your virtual texture pool size, how do the pools look? Are they still oversubscribed, or are they now within budget and the biases just need tuning?
If you were to render the scene using just the Lumen Scene visualization, do you see a significant amount of noise?
Is the scene you are cutting to very dark? Lumen can get noisy in darker scenes, and your exposure may also still be catching up.
After increasing the VT pools, usage remained well below 50% during the problem shots, as confirmed by “r.VT.Residency.Show 1” and “r.VT.Residency.Notify 1”. If the virtual texture pool isn’t the limiting factor, what else could prevent PreRoll from working as expected? Are there any console commands we can use to help identify these issues?
I’ve also noticed that the camera in the subsequence is spawned via a “Spawnable Actor” binding on an object track within that subsequence. Could this setup cause issues with PreRoll?
Are there any console commands we can use to help identify these issues?
Not beyond what I’ve indicated. If you do not see any change to the pools when prerolling, that indicates that there is a different issue.
spawned via a “Spawnable Actor” binding on an object track within that subsequence. Could this setup cause issues with PreRoll?
Potentially. There is a specific setup with subsequences that can fail to send streaming info. If your scene setup looks something like the image below, with subsequence 2 having a spawnable camera, we can fail to find the camera and then not send its location to the streaming system.
[Image Removed]
If your scenes look like this, a practical approach would be to add some overlapping time in the sequence to spawn the other camera so it can fetch its data. This requires some buffer frames at the front of that sequence, so you would want to keep that in mind.
[Image Removed]
To further debug this and rule out any preroll-specific issues, you can set a breakpoint in AnimatePreRoll in MovieSceneCameraCutTrackInstance.cpp. That is where we send data to the streaming manager. The breakpoint will trigger as soon as evaluation hits the preroll section, and you can check whether it passes either of the tests. I would start there before going further into the streaming manager or virtual texture setups.
And also for my curiosity, were you able to rule out Lumen issues with the above suggestions?
Adjusting the Lumen settings did make the scene brighter and some textures clearer from the start, but it did not eliminate the blurry-to-sharp texture transitions seen on some objects.
The issue does occur with back-to-back sequences that have no overlap, each spawning a new camera. However, a breakpoint in AnimatePreRoll() shows that it is receiving the correct future camera location, the correct number of frames early.
In this scene, all objects are nearby and the main change is the camera’s facing direction. Does the StreamingManager take camera orientation into account, or only location? What steps would you recommend to further debug this issue?
Does the StreamingManager take camera orientation into account, or only location?
Just location; the streaming manager doesn’t use orientation for streaming.
There are some other cvars that you can tune to help, specifically for how many requests can be made per frame. We adjusted the defaults for these recently for 5.6. Here’s a short article talking about those.
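For orientation, the cvars in question are the per-frame virtual texture request/upload throttles, along the lines of the ones below; the exact names and sensible values depend on your engine version, so confirm them locally before relying on them:

    r.VT.MaxUploadsPerFrame 64            (tile uploads allowed per frame)
    r.VT.MaxContinuousUpdatesPerFrame 8   (already-resident tiles refreshed per frame)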
One thing I haven’t asked is, are these textures RVT textures? These are typically used for landscape, but I had initially ruled them out based on the video you sent. If they are, there are some other options that you might need to tune.
No, we’re not using RVT textures in this case. Experimenting with the cvars, I didn’t notice any difference in behaviour. Is testing in PIE a reliable way to test them? I did use “r.VT.FlushAndEvictFileCache” to keep repeated runs from impacting each other.
Previously, I had mentioned that some textures would only pop the first time we see them, but after they were fully loaded (and with our increased virtual texture pool), there was no further popping on subsequent camera cuts to them. Now that I can confirm AnimatePreRoll() is being called, I added a PreRoll to the first shot to try to eliminate that initial texture pop. I delayed all the tracks by 100 frames from the start and set the preroll to run during this period.
Even though AnimatePreRoll() was triggered early, I didn’t notice any improvement with the initial texture loading. Is this expected behavior? Is there a way to debug or visualize the progress of textures being streamed in during PreRoll to try to pinpoint what’s happening?
Experimenting with the cvars, I didn’t notice any difference in behaviour. Is testing in PIE a reliable way to test them? I did use “r.VT.FlushAndEvictFileCache” to keep repeated runs from impacting each other.
Yes, you are doing the right thing there. Testing in PIE should be viable, but keep in mind that it will be slightly different from target hardware or any other builds where you may be stripping MIPs from your textures.
There are a couple more things we can attempt. First, can you verify that the textures taking a few frames to load are in fact virtual textures? If your pools are not oversubscribed, those textures may not be going through the virtual texture system and would be handled differently.
Another method, with or without your sequence: you can try turning on the viewport mode Lit > Optimization Viewmodes > Virtual Texture Pending Mips and see which textures are still trying to load in those sections when you change cameras. If a texture is yellow, the mip you want has not been loaded yet. If the texture is black, that is an indicator that it is not marked as a virtual texture and will not be handled by that system. If you are still seeing a lot of yellow, which you shouldn’t be since you’ve reduced oversubscription, you may need to adjust those cvars to see changes.
Another option: you could try enabling an experimental plugin introduced in Unreal 5.5. The Cinematic Prestreaming plugin was designed for use with Movie Render Queue, and its goal is to record an asset that can then be placed on the Sequencer timeline to feed requests to the virtual texture streaming pipeline ahead of when they are needed. It uses the information above to help determine which hints to send. I had not suggested this before because it is intended specifically for MRQ and has not been tested for runtime gameplay.
Could you clarify what the color coding represents in the “VT Pending Mips” mode? Everything appeared greyscale on my end, but I’m not sure if that’s what you meant by “black.”
Also, from the StreamingManager’s perspective, does it make a difference whether a texture is virtual or standard when it comes to streaming it early during a PreRoll setup?
Greyscale is the more accurate description; if it is not turning yellow, that indicates that the virtual texture mips are not changing or do not need to load. You can see the issue more easily if you fly around your level. If you are not using virtual textures for your environment, then tuning the virtual texture parameters will not matter. If you want to visualize which textures in your scene are virtual, you can use ShowFlag.VisualizeVirtualTexture 1.
Apologies if it turns out you’re not using virtual textures; I’ve been working under that assumption because it was noted in the first post.
As I move around the level, I notice some polygons flash green, but eventually everything settles to grey. Does grey indicate that the texture isn’t virtual, or does it mean it’s fully loaded? Would a virtual texture retain its original color?
Back to PreRoll, I observed that our scenes set the DefaultLevelSequenceInstanceData on the LevelSequence actor to offset its origin. However, the camera locations passed to the StreamingManager via AnimatePreRoll() are still the original values from the sequence and don’t account for the offset origin. Could this lead to the StreamingManager bringing in the wrong assets?
In this scene, the cameras are spawned by the track using a “Spawnable” binding, so their positions are determined solely by the track. I’ve verified that neither path inside AnimatePreRoll() is accounting for the offset origin. I’ll wait to hear from you if there’s a possible fix to handle the offset origin.
Update: After some additional testing, I found that removing the camera transform track results in the wrong position for both preroll and the scene, as expected. However, if I keep the transform track and force the “else” branch in AnimatePreRoll(), I get the correct offset position (but only after the second update).
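To illustrate what I mean (the names here are made up; this is just a sketch of the correction I would expect, not a proposed patch): the camera location read from the track is in the sequence’s local space, so the transform origin from DefaultLevelSequenceInstanceData would need to be applied before the location is handed to the streaming manager:

    // Illustrative only; variable and function names are invented for the example.
    // Without this, streaming is warmed at the un-offset (track-space) position.
    FVector ToStreamingWorldLocation(const FTransform& SequenceTransformOrigin,
                                     const FVector& TrackSpaceCameraLocation)
    {
        return SequenceTransformOrigin.TransformPosition(TrackSpaceCameraLocation);
    }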
No, I still haven’t noticed any improvement from using PreRoll with the cameras.
I wanted to test PreRoll on the first shot, since I thought it would have the most impact there. To do this, I added several blank frames at the beginning of the sequence. Interestingly, adding these extra frames made a more noticeable difference than whether or not PreRoll was used during that time.
I suppose the real benefit of using PreRoll with the camera track might only show up when making large jumps in location?
Yes, depending on the size of the jump you may not notice too much of a difference in the texture streaming.
I added several blank frames at the beginning of the sequence.
Yes, this makes sense, because when you initiate the sequence there is typically no preroll to build off of. You would need to inform the streaming manager that you’re about to cut to a new camera, either in the original way or using an alternative method.
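If you end up needing the alternative route, a minimal sketch of hinting the streaming manager yourself ahead of the first cut would be something like this (game-side code; the AddViewLocation defaults may differ slightly by engine version, so check the signature in ContentStreaming.h):

    // Sketch: warm texture streaming around an upcoming camera position, called a
    // little before starting sequence playback or before a hard cut.
    #include "ContentStreaming.h"

    void PrewarmStreamingForCut(const FVector& UpcomingCameraLocation)
    {
        IStreamingManager::Get().AddViewLocation(
            UpcomingCameraLocation,
            /*ScreenSize*/        1.0f,
            /*bOverrideLocation*/ false,
            /*Duration*/          1.0f); // keep the hint alive for roughly a second
    }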
And just for a heads up, we don’t have an official fix yet.