Assert: Assertion failed: State != D3D12_RESOURCE_STATE_COMMON [File:Engine/Source/Runtime/D3D12RHI/Private/D3D12Util.cpp] [Line: 1773]
From what I have been able to debug, it looks like it happens during the resource discard of the ResolvedSpecularIndirect resources that run on the async queue. Are there any known issues around this resource or this area of the reflection code? When we move from Epic to High, or set r.Lumen.AsyncCompute=0, the issue goes away.
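For reference, the workaround we are using in the meantime is just forcing Lumen off async compute. A minimal sketch, assuming the cvar is set from DefaultEngine.ini (it can also be toggled from the in-game console):

```ini
; DefaultEngine.ini -- disable async compute for Lumen as a temporary workaround
[SystemSettings]
r.Lumen.AsyncCompute=0
```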
At first glance, it sounds like one of the resources being discarded is in an incorrect layout for the discard and needs to be transitioned properly. Can you provide any more information from the time you hit the assert? A call stack, or any log output from a run with -d3ddebug and -rhivalidation, would be helpful.
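For example, something along these lines (the project path is just a placeholder):

```
UnrealEditor.exe MyProject.uproject -d3ddebug -rhivalidation -log
```

That should get the D3D12 debug layer and RHI validation messages into the log around the time of the assert.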
[2025.10.30-01.50.00:266][763]LogD3D12RHI: Error: [D3DDebug] ID3D12CommandQueue::ExecuteCommandLists: Using DiscardResource on Command List (0x00000194FA94B6D0:'Unnamed ID3D12GraphicsCommandList Object'): Resource state (0x4: D3D12_RESOURCE_STATE_RENDER_TARGET) of resource (0x000001951ADB5E30:'Lumen.Reflections.FrontLayer.SpecularIndirect') (subresource: 0) is invalid for use as a Discarded Resource. Expected State Bits (all): 0x8: D3D12_RESOURCE_STATE_UNORDERED_ACCESS, Actual State: 0x4: D3D12_RESOURCE_STATE_RENDER_TARGET, Missing State: 0x8: D3D12_RESOURCE_STATE_UNORDERED_ACCESS.
[2025.10.30-01.50.00:289][764]LogD3D12RHI: Error: [D3DDebug] ID3D12CommandQueue::ExecuteCommandLists: Using DiscardResource on Command List (0x000001960391CF00:'Unnamed ID3D12GraphicsCommandList Object'): Resource state (0x4: D3D12_RESOURCE_STATE_RENDER_TARGET) of resource (0x000001951ADB5E30:'Lumen.Reflections.SpecularIndirect') (subresource: 0) is invalid for use as a Discarded Resource. Expected State Bits (all): 0x8: D3D12_RESOURCE_STATE_UNORDERED_ACCESS, Actual State: 0x4: D3D12_RESOURCE_STATE_RENDER_TARGET, Missing State: 0x8: D3D12_RESOURCE_STATE_UNORDERED_ACCESS.
When the resource reaches the discard, its requested access is pixel/non-pixel SRV plus discard. When the transition looks up the state via GetD3D12ResourceState (D3D12Commands.cpp), the lookup happens on the async compute context, so it skips the state assignment the resource would get for SRVGraphics and hits the assert.
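To illustrate what I mean, here is a simplified sketch of that pattern. This is not the actual engine code; the function body and branch structure are my own reconstruction of the behaviour described above, and only GetD3D12ResourceState, the ERHIAccess flags, and the D3D12 state bits are real names:

```cpp
// Simplified sketch only -- not the actual engine implementation.
// Assumes UE's RHI headers (ERHIAccess, EnumHasAnyFlags) and <d3d12.h> are available.
D3D12_RESOURCE_STATES GetD3D12ResourceStateSketch(ERHIAccess Access, bool bIsAsyncComputeContext)
{
	D3D12_RESOURCE_STATES State = D3D12_RESOURCE_STATE_COMMON;

	if (EnumHasAnyFlags(Access, ERHIAccess::UAVCompute | ERHIAccess::UAVGraphics))
	{
		State |= D3D12_RESOURCE_STATE_UNORDERED_ACCESS;
	}
	if (EnumHasAnyFlags(Access, ERHIAccess::SRVCompute))
	{
		State |= D3D12_RESOURCE_STATE_NON_PIXEL_SHADER_RESOURCE;
	}
	if (!bIsAsyncComputeContext && EnumHasAnyFlags(Access, ERHIAccess::SRVGraphics))
	{
		// On the graphics queue the pixel/non-pixel SRV bits would be added here.
		// On the async compute queue this branch is skipped, so a resource whose only
		// access is SRVGraphics (+ Discard) ends up with State == COMMON, which trips
		// the "State != D3D12_RESOURCE_STATE_COMMON" assert.
		State |= D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE | D3D12_RESOURCE_STATE_NON_PIXEL_SHADER_RESOURCE;
	}

	return State;
}
```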
Still tracking this resource through RDG, but at least I now have all the info that leads to the crash.
Okay, I am glad that was helpful. If you require further assistance, please don’t hesitate to contact me. Once you have resolved the issue, could you consider submitting a pull request, so that this validation error is no longer present?
I think the solution for this is to remove the TexCreate_RenderTargetable flag. I posted a pull request here: https://github.com/EpicGames/UnrealEngine/pull/14036/files. That removes the -d3ddebug transition errors for the discard on the graphics queue, and it also fixes this bug where the resource is discarded from async compute.
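For anyone else hitting this before a fix lands, the shape of the change is just dropping the render-target flag when the texture is created. The snippet below is only illustrative, not the actual diff from the PR; it assumes an FRDGBuilder (GraphBuilder) and an extent (ViewExtent) are in scope, and the format and clear value are placeholders:

```cpp
// Illustrative only -- not the actual PR diff. The SpecularIndirect texture is only
// written as a UAV and read as an SRV, so the render-target flag is unnecessary and
// appears to be why the resource ends up in D3D12_RESOURCE_STATE_RENDER_TARGET at the discard.
const FRDGTextureDesc Desc = FRDGTextureDesc::Create2D(
	ViewExtent,                                 // placeholder extent
	PF_FloatRGBA,                               // placeholder format
	FClearValueBinding::Transparent,
	TexCreate_ShaderResource | TexCreate_UAV);  // TexCreate_RenderTargetable removed

FRDGTextureRef SpecularIndirect = GraphBuilder.CreateTexture(Desc, TEXT("Lumen.Reflections.SpecularIndirect"));
```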
Thanks for chasing down a fix and creating a PR. I know we recently made some improvements to better handle render target transitions during resource creation, and I wonder if our fix achieves the same effect as your proposal in a more robust manner. Could you try integrating CL 48043548 into your build and verify whether you still experience the crash with your fix removed?
Thanks for the CL! Unfortunately, I think this code is too far ahead of where we are. I tried hand-picking it in, but there are too many dependencies on code we do not have. Will it be picked up in 5.7.1?
I am not sure whether we plan to include this change in a hotfix, but I have reached out to some internal contacts to get you an answer. I will get back to you as soon as I have more information. I will be out of the office next week, so you might not hear from me until the beginning of December. I hope that is okay.
Okay, I’ve just been informed that this fix won’t make the 5.7.1 deadline, but we will work to include it in the 5.7.2 hotfix. A public Jira issue is available here: Unreal Engine Issues and Bug Tracker (UE-354891). Once the link goes live, you can track the progress of the fix there. Would that be acceptable?
That works for me! Thanks, Tim. I’ll keep our fix locally until we can get to 5.7.1, and then I’ll try to patch in the CL and see if that fixes the issue for us.