Unable to cache Niagara compute PSOs for dynamically loaded actors. The game hitches every time the Niagara systems load, accompanied by the "Encountered new compute PSO" log

I am trying to fix stuttering issues that keep occurring in my Android game. I have followed the instructions at https://dev.epicgames.com/documentation/en-us/unreal-engine/manually-creating-bundled-pso-caches-in-unreal-engine, and no matter what I do I continue to see hitches and the "Encountered new compute PSO" log when my Niagara systems are swapped out dynamically at runtime.

We are using Unreal Engine 5.2, but we have backported the relevant changes into our engine codebase.

I set `r.Shaders.ExtraData` to true and added a few extra logs during PSO expansion, during builds, and at the point where the "Encountered new compute PSO" log fires. I can see lines related to my Niagara system in both the expansion and build logs.

I printed the hash of each PSO that was added during the build (after expansion takes place in our build pipeline), and I can cross-reference that same hash against my runtime log where it says a new compute PSO was encountered. The hashes are identical, yet my game build ALWAYS attempts to create and save the "new" PSO every single time I run it.
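For reference, the runtime-side log we added is essentially a one-liner next to the existing "Encountered new compute PSO" message; the hook point and the `ComputeShaderHash` variable name are from our local change, not stock engine code:

// Local addition (illustrative): dump the FSHAHash so it can be diffed
// against the hashes printed during the build step.
UE_LOG(LogRHI, Display, TEXT("Encountered new compute PSO, shader hash: %s"), *ComputeShaderHash.ToString());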

PSO caches generally work for us. Building them is something we do regularly when performance tuning, but for some reason I just can't get these particular Niagara systems to play along with everything else.

I also run the following console commands in my build before I start swapping out the actors with the Niagara systems attached:

`r.ShaderPipelineCache.PrintNewPSODescriptors 2`

`log LogRHI VeryVerbose`

They have been helpful in tracking things down.
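In case it helps anyone reading later, here is a minimal sketch of applying the same commands from C++ early in startup (a helper of our own, not engine code):

// Hypothetical helper; call it once the engine is up (e.g. from a game
// instance or subsystem) before any dynamic content starts loading.
static void EnablePSODebugLogging()
{
	if (GEngine)
	{
		// Same commands as above, executed programmatically.
		GEngine->Exec(nullptr, TEXT("r.ShaderPipelineCache.PrintNewPSODescriptors 2"));
		GEngine->Exec(nullptr, TEXT("log LogRHI VeryVerbose"));
	}
}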

So it all comes down to these questions:

  1. All your documentation says "compute PSOs are collected during builds, unlike the rest". It isn't explicitly stated that you CANNOT record compute shaders into a cache and then expand them… but it would be nice to know explicitly if this is not possible. So: is it possible to record (-logPSO) compute shaders from my Android game, then run those recording files through the PSO expansion commandlet (see the sketch after this list) to produce an .shk file that my next build can use to avoid hitches?
    1. If it is not possible, are there any good suggestions for best practices to ensure that all my BP assets get the proper compute state cached during builds to avoid these hitches? Currently all these Niagara systems are part of a library of content that a user can decide when to load. This library will grow over time, and we want to let the user pick various content and have it become visible without any PSO-related hitches.
  2. Are there any more debug tools I can use to track down why these Niagara systems are not making it into any cache file?
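For context, the expansion step in our pipeline follows the linked documentation and looks roughly like this (project name and paths are illustrative):

UnrealEditor-Cmd.exe MyProject -run=ShaderPipelineCacheTools expand /Recordings/*.rec.upipelinecache /StableKeys/*.shk /Output/MyProject_SF_VULKAN_ES31_ANDROID.stablepc.csv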

It seems highly likely that you'll need more information to help me drill down to an answer here, so just let me know what you need from me and I will do the best I can. Unfortunately, I cannot send any samples from my project, and it has grown large enough that trying to chisel off a portion to isolate and send would be extremely difficult.

[Attachment Removed]

Hi Zachary,

We are checking with the development team whether any changes affecting this may have been implemented since 5.2, and I'll report back once I hear from them. If you are able to provide a repro project, that would be most helpful. Also, is this common to both OpenGL ES and Vulkan?

Best regards.

[Attachment Removed]

I intended to keep this ticket open since this is a very big issue for my team at the moment; I just got too busy, and the ticket was closed before I could respond. Any chance we can re-open it? I am discussing with my team how best to proceed and whether we can provide samples or more specific leads to help narrow this down.

(sorry for the delay)

[Attachment Removed]

Hi Zachary,

Is it possible to debug and see whether PipelineStateCache::GetAndOrCreateComputePipelineState is actually called when the PSO journal is pre-cached at runtime, and whether it is actually performing operations for the Vulkan RHI? It may also be useful to confirm whether you see equivalent behaviour under GLES, even if you are not targeting it for the final application.
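For example, temporary instrumentation along these lines at the top of that function (purely illustrative, not stock engine code) would confirm both points:

// Illustrative temporary log; remove once the call path is confirmed.
UE_LOG(LogRHI, Display, TEXT("GetAndOrCreateComputePipelineState reached, active RHI=%s"),
	GDynamicRHI ? GDynamicRHI->GetName() : TEXT("none"));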

Best regards.

[Attachment Removed]

I see this constantly in the log files I pull off my Android device. It might be connected in some way:

[2025.11.18-23.51.23:385][ 0]LogRHI: Could not open FPipelineCacheFile: ../../../WavePlatform/Content/PipelineCaches/Android/WavePlatform_SF_VULKAN_ES31_ANDROID.stable.upipelinecache
[2025.11.18-23.51.23:385][ 0]LogRHI: Could not open FPipelineCacheFile: ../../../WavePlatform/Content/PipelineCaches/Android/WavePlatform_SF_VULKAN_ES31_ANDROID.stable.upipelinecache
[2025.11.18-23.51.23:385][ 0]LogRHI: Could not open FPipelineCacheFile: ../../../WavePlatform/Content/PipelineCaches/Android/WavePlatform_Chunk105_SF_VULKAN_ES31_ANDROID.stable.upipelinecache
[2025.11.18-23.51.23:385][ 0]LogRHI: Could not open FPipelineCacheFile: ../../../WavePlatform/Content/PipelineCaches/Android/WavePlatform_SF_VULKAN_ES31_ANDROID.stable.upipelinecache
[2025.11.18-23.51.23:385][ 0]LogRHI: Could not open FPipelineCacheFile: ../../../WavePlatform/Content/PipelineCaches/Android/WavePlatform_Chunk103_SF_VULKAN_ES31_ANDROID.stable.upipelinecache
[2025.11.18-23.51.23:385][ 0]LogRHI: Could not open FPipelineCacheFile: ../../../WavePlatform/Content/PipelineCaches/Android/WavePlatform_SF_VULKAN_ES31_ANDROID.stable.upipelinecache
[2025.11.18-23.51.23:385][ 0]LogRHI: Could not open FPipelineCacheFile: ../../../WavePlatform/Content/PipelineCaches/Android/WavePlatform_Chunk101_SF_VULKAN_ES31_ANDROID.stable.upipelinecache
[2025.11.18-23.51.23:385][ 0]LogRHI: Could not open FPipelineCacheFile: ../../../WavePlatform/Content/PipelineCaches/Android/WavePlatform_SF_VULKAN_ES31_ANDROID.stable.upipelinecache
[2025.11.18-23.51.23:385][ 0]LogRHI: Could not open FPipelineCacheFile: ../../../WavePlatform/Content/PipelineCaches/Android/WavePlatform_Chunk100_SF_VULKAN_ES31_ANDROID.stable.upipelinecache
[2025.11.18-23.51.23:385][ 0]LogRHI: Could not open FPipelineCacheFile: ../../../WavePlatform/Content/PipelineCaches/Android/WavePlatform_SF_VULKAN_ES31_ANDROID.stable.upipelinecache
[2025.11.18-23.51.23:385][ 0]LogRHI: Could not open FPipelineCacheFile: ../../../WavePlatform/Content/PipelineCaches/Android/WavePlatform_Chunk0_SF_VULKAN_ES31_ANDROID.stable.upipelinecache
[2025.11.18-23.51.23:385][ 0]LogRHI: Could not open FPipelineCacheFile: ../../../WavePlatform/Content/PipelineCaches/Android/WavePlatform_SF_VULKAN_ES31_ANDROID.stable.upipelinecache

[Attachment Removed]

So we were able to verify that the shader platform on the Quest 3 device is set to `SP_VULKAN_ES3_1_ANDROID`.

We added some logs to SetComputePipelineState and have verified that it does log during the journaling step (or what I assume is the journaling step).

So far, everything appears to be correct on the Android device.
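For reference, the one-liner we used (our own addition) to confirm that the active platform maps to the SF_VULKAN_ES31_ANDROID format seen in the cache file names above:

// Local addition: LegacyShaderPlatformToShaderFormat maps the active
// EShaderPlatform to the shader format name used in cache file names.
UE_LOG(LogRHI, Display, TEXT("Active shader format: %s"), *LegacyShaderPlatformToShaderFormat(GMaxRHIShaderPlatform).ToString());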

[Attachment Removed]

Hi Zachary,

Can you share the logs produced by the additional logging in the journaling step?

Best regards.

[Attachment Removed]

I have an update on our situation: it appears we have identified and solved the problem. Here are the engine modifications we made to fix it:

1 - Modification to UE::PipelineCacheUtilities::LoadStableKeysFile in PipelineCacheUtilities.cpp - We disabled a few lines at the end that were used for the old CSV-format export, because they were corrupting the paths for compute state objects. We commented out these two lines:

// Disabled: round-tripping ClassNameAndObjectPath through its string form
// was corrupting the paths for compute state objects.
// FString StringRep = Item.ClassNameAndObjectPath.ToString();
// Item.ClassNameAndObjectPath.ParseFromString(StringRep);

2 - We added a new function, FString FCompactFullName::GetPackageNameOnly() const, in ShaderCodeLibrary.cpp - This allows us to retrieve the proper package name while collecting compute PSOs. The full function is as follows:

FString FCompactFullName::GetPackageNameOnly() const
{
	// ObjectClassAndPath stores the class name at index 0 followed by the
	// path components; index 1 is the package name. Returning only the
	// package name lets compute PSO entries match normal package paths.
	if (ObjectClassAndPath.Num() > 1)
	{
		return ObjectClassAndPath[1].ToString();
	}
	return FString();
}
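To make the difference concrete, with hypothetical asset names:

// Hypothetical example: for a Niagara GPU compute script whose
// ObjectClassAndPath is ["NiagaraScript", "/Game/FX/Explosion", "Explosion", "GPUComputeScript"],
// the old ToStringPathOnly() keeps the subobject components
// (".../Explosion.GPUComputeScript"), whereas GetPackageNameOnly()
// returns just "/Game/FX/Explosion", matching normal package paths.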

3 - Modification to SaveBinaryPipelineCacheFile in ShaderPipelineCacheToolsCommandlet.cpp - In the CreateAndDispatchWhenReady call beneath the comment "// first, kick off a task to prepare new StableMap that is easier to compare against", we modified the engine code to call our new GetPackageNameOnly() function instead of the old ToStringPathOnly() function, so that it can accurately locate the compute object. That entire block of code now looks like this:

// first, kick off a task to prepare new StableMap that is easier to compare against
TMultiMap<FName, FSHAHash> StableNameMap;
FGraphEventRef StableMapConvTask = FFunctionGraphTask::CreateAndDispatchWhenReady([&StableNameMap, &StableMap]
	{
		for (const TPair<FStableShaderKeyAndValue, FSHAHash>& Pair : StableMap)	// could be parallelized (skip first N*ThreadIdx iterations on each thread?)
		{
			// Ensure we extract only the package name from ClassNameAndObjectPath: compute PSOs may include
			// subobject paths (e.g. ".../Core_0.GPUComputeScript"), which previously caused the TSet<FName>
			// Packages lookups to fail, because those lookups use normal package paths (see ShadersInChunk).
			FName PackageName(*Pair.Key.ClassNameAndObjectPath.GetPackageNameOnly());
			StableNameMap.Add(PackageName, Pair.Value);
		}
	}, TStatId());

With these changes, our compute shaders are now correctly being cooked into the cache, and the runtime hitches are gone. We also added a bit more logging to help us identify what is wrong if we ever encounter new compute PSOs at runtime again.

The logging sits alongside the GetPackageNameOnly() call shown above: directly after that first for loop completes, we dump all the hashes out.

for (const TTuple<FName, FSHAHash>& Elem : StableNameMap)
{
	UE_LOG(LogShaderPipelineCacheTools, Display, TEXT("StableNameMapEntry Package='%s' Hash='%s'"), *Elem.Key.ToString(), *Elem.Value.ToString());
}

WARNING: This log statement will print a massive amount of data into your log files during builds, and it can make them noticeably slower. I advise wrapping the dump loop in a CVar that you only turn on when investigating an issue.
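A minimal sketch of that gating, assuming a CVar of our own naming (the CVar name is ours, not an engine one):

// Hypothetical CVar (our naming); defaults off so normal builds stay fast.
static TAutoConsoleVariable<int32> CVarDumpStableNameMap(
	TEXT("r.ShaderPipelineCacheTools.DumpStableNameMap"),
	0,
	TEXT("If nonzero, dump every StableNameMap entry while saving the binary pipeline cache."));

// Only pay the logging cost when explicitly requested.
if (CVarDumpStableNameMap.GetValueOnAnyThread() != 0)
{
	for (const TTuple<FName, FSHAHash>& Elem : StableNameMap)
	{
		UE_LOG(LogShaderPipelineCacheTools, Display, TEXT("StableNameMapEntry Package='%s' Hash='%s'"), *Elem.Key.ToString(), *Elem.Value.ToString());
	}
}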

Stéphane, does this seem like a straightforward fix to you and your team? I am more than happy to open a pull request if this seems like a legitimate improvement to the engine.

[Attachment Removed]

Hi Zachary,

At first glance, this all seems reasonable. Thank you for sharing your fix! Given the age of 5.2, I'll need to run this by the development team to ensure it still applies post-5.7. I'll circle back once I have details or a CL to reference in the engine main branch.

Best regards.

[Attachment Removed]

Hi Zachary,

Your proposed change has been reviewed and approved. CL# 49656792. Thanks for bringing this to our attention and for the research in finding a fix.

Best regards.

[Attachment Removed]

We are specifically targeting the Meta Quest Android headset using Vulkan at the moment. It's been far too long since I've checked our desktop platforms to say whether those are also affected; I can chase that down as well.

[Attachment Removed]

Yes, we can test it out. I'll prioritize this for my team immediately.

[Attachment Removed]

Excellent news! Great work!

[Attachment Removed]