AnimationBudgetAllocator doesn't use interpolation on registered components, which makes the animation feel like a frame-rate drop. What is the proper way to address this?

In Unreal Insights traces, you can see that the worker threads contain only animation graph evaluations and no skeletal mesh interpolations.

Evaluation of the animation graph is very expensive.

Skipping the animation update makes players feel like there’s an FPS drop.

Interpolation between frames looks much better than frame skipping, and has a very small cost compared to the actual evaluation of the animation graph.

[Image Removed]

What would be the proper way to set up AnimationBudgetAllocator to use interpolation when the significance of the meshes is auto-calculated?

Steps to Reproduce

  1. Create GameAnimationSample project for Unreal Engine 5.5
  2. Activate plugin AnimationBudgetAllocator in GameAnimationSample project
  3. In CBP_SandboxCharacter, change Component Class for component Mesh to be SkeletalMeshComponentBudgeted, then set Auto Register with Budget Allocator & Auto Calculate Significance to true
  4. Edit DefaultLevel’s level blueprint to Enable Animation Budget on BeginPlay and spawn 20 animated actors in the level (as on the screenshot)[Image Removed]
  5. Start PIE (Play In Editor) and execute two console commands: stat AnimationBudgetAllocator and a.budget.debug.enabled 1
  6. The statistics show that no SkeletalMeshComponents were interpolated & animation looks like the game has a frame rate drop[Image Removed]

Hi, from your description, it sounds like you want some (or all) of the budgeted meshes to run in interpolation mode when their significance is being autocalculated - is that correct?

By default, the budget allocator divides the budgeted meshes into three buckets: one for those always being ticked, one for those being interpolated, and one for those being skipped entirely this frame.
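As a rough mental model of that bucketing (my own simplified sketch for intuition, NOT the engine's actual math, which lives in FAnimationBudgetAllocator::CalculateWorkDistributionAndQueue): components sorted by descending significance are fully ticked while the budget allows, interpolated while spare budget and the interpolation cap allow, and skipped otherwise.

```cpp
#include <vector>

// Simplified sketch of the three-bucket split (not the engine's real math).
// TickCostMs/InterpCostMs are hypothetical per-component costs; components
// are assumed to be pre-sorted by descending significance.
struct Buckets { int NumTicked = 0, NumInterpolated = 0, NumSkipped = 0; };

Buckets DistributeBudget(int NumComponents, float BudgetMs,
                         float TickCostMs, float InterpCostMs,
                         int MaxInterpolatedComponents)
{
    Buckets Out;
    float Remaining = BudgetMs;
    for (int i = 0; i < NumComponents; ++i)
    {
        if (Remaining >= TickCostMs)           // enough budget: full tick
        {
            Remaining -= TickCostMs;
            ++Out.NumTicked;
        }
        else if (Remaining >= InterpCostMs &&  // spare budget and under the cap:
                 Out.NumInterpolated < MaxInterpolatedComponents)
        {
            Remaining -= InterpCostMs;         // interpolate
            ++Out.NumInterpolated;
        }
        else
        {
            ++Out.NumSkipped;                  // throttled entirely this frame
        }
    }
    return Out;
}
```

Note that in this toy model, setting MaxInterpolatedComponents to 0 sends every non-ticked component straight to the skip bucket, and a tick budget that is consumed entirely by full ticks leaves nothing for interpolation.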

So it could just be that the update time for the meshes that are being ticked is high enough (or the budget set in the allocator is low enough) so that the budgeter is calculating that there isn’t enough time available to interpolate any meshes and is setting them all to be skipped this frame. To test that, I would reduce the number of meshes in the level and increase the budget that you’ve set on the allocator (you can do that via a.Budget.BudgetMs).

Another possibility is that FAnimationBudgetAllocatorParameters::MaxInterpolatedComponents has somehow been set to 0. That value is a hard limit on the number of meshes that can be interpolated so when that value is exceeded, any further meshes will fall through to the skip bucket for that frame.

A slightly different option with this - if you only ever want meshes to interpolate as a minimum - is to set the FAnimBudgetAllocatorComponentData::bNeverThrottle flag for your mesh. You can do that by calling USkeletalMeshComponentBudgeted::SetComponentSignificance and passing in the relevant flags. (That function is only available in native code at the moment.) You would also need to set MaxInterpolatedComponents with this approach. Be aware that there was an issue with the allocator prior to 5.6 where those flags would be overwritten if auto-calculate significance was enabled for the mesh. This was fixed in CL 38449236 if you want to look at the change.

You may also just want to debug the code to see why the meshes aren’t being put into the interpolation bucket. If you do want to do that, the code to look at is FAnimationBudgetAllocator::CalculateWorkDistributionAndQueue. Specifically look for the for loop in that function with this comment:

// Bucket 2: interpolated

Then you should be able to see how many components are added to the interpolation bucket. If it's zero, you can track back and see why InterpolationIndexEnd is the same as SortedComponentIndex (ie. zero meshes to interpolate).

Good day, Euan!

Thank you so much for your response and I hope you had a great weekend! 🙂

I ran the tests in GameAnimationSample to confirm that our project didn’t introduce any issues.

From the results, increasing a.Budget.BudgetMs doesn't change the number of interpolated components, but it does increase the number of ticking components. This suggests that the budget isn't being distributed between Component Tick and Component Interpolation.

I’ve attached a few screenshots for reference.

[Image Removed][Image Removed][Image Removed][Image Removed]

FAnimationBudgetAllocatorParameters::MaxInterpolatedComponents is set to 16 in GameAnimationSample[Image Removed]

We may look into taking in CL 38449236 and creating something to manage bNeverThrottle.

When I debugged the code, it seemed there’s no clear way to distribute a.Budget.BudgetMs between ticking components and interpolating components — bucket 2 consistently ends up with no time left.

To explore this, I introduced a Ratio variable, and it seems to work as expected:

Before:

const float WorkUnitsExcess = FMath::Max(0.0f, TotalIdealWorkUnits - WorkUnitBudget);

After:

// the Ratio variable is Parameters.TickBudgetRatio
const float WorkUnitBudgetTick = WorkUnitBudget * Parameters.TickBudgetRatio;
 
// Ramp-off work units that we tick every frame once required ticks start exceeding budget
const float WorkUnitsExcess = FMath::Max(0.0f, TotalIdealWorkUnits - WorkUnitBudgetTick);

This way, we allocate some time for ticking components on the main thread, while also reserving part of the budget for component interpolation.
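To illustrate the intent numerically, here is the proposed change as a standalone sketch with made-up numbers (TickBudgetRatio is the hypothetical parameter introduced above; FMath::Max swapped for std::max):

```cpp
#include <algorithm>

// Sketch: with the full WorkUnitBudget, excess is only counted past the whole
// budget; with a TickBudgetRatio below 1.0, part of the budget is reserved up
// front for interpolation, so the tick bucket starts ramping off earlier.
float ComputeWorkUnitsExcess(float TotalIdealWorkUnits, float WorkUnitBudget,
                             float TickBudgetRatio)
{
    const float WorkUnitBudgetTick = WorkUnitBudget * TickBudgetRatio;
    return std::max(0.0f, TotalIdealWorkUnits - WorkUnitBudgetTick);
}
```

For example, with 20 ideal work units and a budget of 16: a ratio of 1.0 reproduces the original behaviour (excess of 4), while a ratio of 0.75 reserves a quarter of the budget and reports an excess of 8, ramping off ticks sooner.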

What do you think?

Best wishes,

Kirill

Hi, what you’re seeing in those screenshots is in line with what I’d expect when the allocator is running with the default parameters.

By default, CalculateWorkDistributionAndQueue will tend to prioritise fully ticking units where there is enough budget. The cost of that is that fewer of the non-ticked units will be interpolated, and more of them will be throttled entirely (not ticked). So I would expect the interpolated bucket to contain the fewest components. But you can bias this behaviour towards fewer full ticks and more interpolation, which sounds effectively like what you're doing with the ratio value that you added.

You can do this with FAnimationBudgetAllocatorParameters::AlwaysTickFalloffAggression and FAnimationBudgetAllocatorParameters::InterpolationFalloffAggression. If you increase the value of the first property, it will reduce the weighting towards full ticks (by effectively increasing the ‘cost’ of each ticked mesh). If you then decrease the second property, it will increase the weighting towards interpolation (by effectively decreasing the ‘cost’ of each interpolated mesh). The expected values for both of those properties are between 0.1 and 0.9.

I wouldn’t change how WorkUnitsExcess is calculated, like you suggested, since that value is supposed to represent the number of meshes that would be over budget if fully ticking all the mesh components. If you change what that value represents by changing how it’s calculated, I wouldn’t be surprised if some of the later calculations break.

Good day, Euan!

Thank you for the answer!

With the new values AlwaysTickFalloffAggression 0.9 & InterpolationFalloffAggression 0.1, interpolation started to kick in and there’s a big difference now:

4 components, default vs new values:

[Image Removed][Image Removed]

7 components:

[Image Removed][Image Removed]

15 components:

[Image Removed][Image Removed]

While doing those tests, I've realized that I was probably asking the wrong question.

If you take a look at the screenshots with 4 components, both cases are very close to the budget limit, and the second case started to use interpolation for one of the components.

With AnimationBudgetAllocator, I'm trying to find settings where 3 out of 4 components would use interpolation and only one component would tick.

Tick and interpolation of a component have a very similar cost on the main thread, but there's a very big difference in cost on the worker thread.

In the attached screenshot below, the tick calls an evaluation of the animation graph that takes 800 µs:

[Image Removed]

And interpolation costs 20.7 µs on the worker thread:

[Image Removed]

So, I would like to find a setting to distribute animation budget time from Ticking to Interpolation, so we can reduce the load on the WorkerThreads for low spec hardware.

Visually, interpolation should look better than frame skipping, as frame skipping on low-spec hardware feels like a frame-rate drop.

What do you think?

Ok yeah, that all makes sense. It sounds like your use case is slightly different to the one that the allocator was implemented for. With the default behaviour at least, the main use case for the allocator is reducing game thread work. The reason for that is that it tends to be the main bottleneck that we see on projects, rather than the worker thread work. And like you’ve seen, interpolation is just about as expensive on the game thread as the full animation update since we still have to tick the mesh. So the intended behaviour with the allocator is to look at the budget for the frame, tick as many meshes as possible within that budget, then with the leftover budget, interpolate if possible and skip/throttle on the rest of the meshes.

You can bias with the values I mentioned previously, but only to an extent, because they cap out at the number of excess meshes that don’t fit within the budget - ie.

const float WorkUnitsToRunInFull = FMath::Clamp(WorkUnitBudget - (WorkUnitsExcess * Parameters.AlwaysTickFalloffAggression), (float)NumComponentsToNotSkip, (float)TotalIdealWorkUnits);

So WorkUnitsToRunInFull will never be less than WorkUnitBudget - WorkUnitsExcess, whereas in your use case it sounds like you do want to intentionally tick fewer meshes to be able to interpolate more.

We don’t have anything within the existing code that would allow you to do this, so I think you will have to fall back to making engine modifications here. It would be something similar to the code that you mentioned previously, but I would look at modifying the value of WorkUnitsToRunInFull rather than WorkUnitsExcess, because I think that would have fewer knock-on consequences for the math later in the function. It would still need a good amount of testing; my concern is that you could end up in a situation where the math is wrong and you don’t calculate the budget correctly. But I think just modifying WorkUnitsToRunInFull will be ok.

Actually, just looking at that code again after I sent the previous message - you could just try bumping AlwaysTickFalloffAggression above 1.0. We haven’t tested that but it would have the same effect as applying some kind of modifier to WorkUnitsToRunInFull.
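To see why values above 1.0 would have that effect, here is the clamp from CalculateWorkDistributionAndQueue as a standalone sketch (all numbers below are made up, and FMath::Clamp is swapped for std::clamp):

```cpp
#include <algorithm>

// Standalone sketch of the clamp from CalculateWorkDistributionAndQueue.
// With aggression <= 1.0 the result never drops below
// WorkUnitBudget - WorkUnitsExcess; values above 1.0 push past that floor,
// intentionally ticking fewer meshes so more budget is left for interpolation.
float ComputeWorkUnitsToRunInFull(float WorkUnitBudget, float WorkUnitsExcess,
                                  float AlwaysTickFalloffAggression,
                                  float NumComponentsToNotSkip,
                                  float TotalIdealWorkUnits)
{
    return std::clamp(
        WorkUnitBudget - WorkUnitsExcess * AlwaysTickFalloffAggression,
        NumComponentsToNotSkip, TotalIdealWorkUnits);
}
```

For example, with 20 ideal work units, a budget of 16 and therefore an excess of 4: aggression 1.0 gives 16 - 4 = 12 full ticks (the usual floor), while aggression 2.0 gives 16 - 8 = 8, freeing budget for more interpolated components.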

Good day, Euan!

Thank you very much for your insightful answers, they were extremely helpful!

I agree that this application is quite different and specific compared to other cases, but our goal is to squeeze as much frame rate as possible from the i5-8600K (6 threads) while still maintaining some level of visual quality.

Thanks again for your valuable input!

Good to hear that was useful! I’ll close out this thread for now but feel free to reopen it if you have related questions in future.