nDisplay render sync policy timeout

I’m encountering an issue where my packaged nDisplay build fails to launch when using either the Ethernet or NVIDIA sync policy. The same build launches and runs fine when the render sync policy is set to None.
When I switch to Ethernet or NVIDIA sync, the build launches to a blank screen and eventually shuts down after hitting the sync barrier timeout.
I have Quadro Sync II cards installed and properly configured on all machines, and frame lock is active – green LED indicators are present on all sync cards and in the NVIDIA Control Panel.
The firewall is fully disabled and all ports are open. The machines are all on the same subnet with identical Mosaic/EDID configurations.
I've tried reinstalling UE and factory resetting the machines, but the issue persists. I have no idea why even the Ethernet sync policy fails.
A separate cluster of machines on the same network, with the same sync cards, works fine with the exact same build and config.

Has anyone encountered a similar issue or have ideas on what might be causing the failure specifically with sync-enabled policies?
Would appreciate any guidance or troubleshooting suggestions.

So, I don’t use the NVIDIA sync policy, but I do use Ethernet. I tend to find that when the project starts up with the black screen (and sometimes you can see the three squarish dots in the bottom right), it's attempting to connect the cluster nodes together.

If this fails for any reason, Unreal Engine just immediately closes. I tend to find that when this happens, it's because either my DCRA (Display Cluster Root Actor) is misconfigured, or the nDisplay config file that was pulled in to spawn the DCRA was misconfigured.

One thing you might try is reviewing your Node_0.log, assuming you launch the packaged development build with the Switchboard command-line arguments. If you're not using the Switchboard command lines, the build will just log according to how your INI files are configured.
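
For reference, the launch line Switchboard generates on my setup looks roughly like this – the executable name, config path, and node ID below are placeholders for your own, and the exact flags can differ a bit between engine versions:

```
MyPackagedProject.exe -dc_cfg="C:\nDisplay\myCluster.ndisplay" -dc_node=node_0 -fullscreen Log=node_0.log
```

The resulting log ends up in the packaged project's Saved/Logs folder on each node.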

Either way, I would find the log file, make sure the verbosity is set high enough, and see what the display cluster classes are doing to get a hint at what is happening.
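
If the default verbosity isn't telling you much, you can bump the display cluster log categories in the node's Engine.ini (or pass the same thing via -LogCmds on the command line). Treat the category names below as a starting point – they can vary slightly between engine versions, so check them against the categories that already show up in your log:

```
[Core.Log]
LogDisplayClusterCluster=Verbose
LogDisplayClusterNetwork=Verbose
LogDisplayClusterRender=Verbose
```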

This is where I tend to find issues.

Also, just out of curiosity, do you use hostnames instead of IP addresses in your configuration? I have found that hostnames fail because the display cluster classes, for some reason, use IPv4EndPoint instead of a more generic class that can handle either form.
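
For example, in the exported .ndisplay JSON I just put the raw IPv4 address in each node's host field rather than a machine name. This is only a rough fragment from memory (real node entries carry window/viewport settings too, and key names shift between engine versions), but it shows the idea:

```json
"nodes": {
  "node_0": { "host": "192.168.0.101" },
  "node_1": { "host": "192.168.0.102" }
}
```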

Unfortunately I don't have a solution for you, but I've run into the same issue in a few test setups.
Since they were just test setups, I didn't bother tracking down the cause, and as in your case, they worked fine on another setup.

Maybe try different drivers or update the firmware on your sync cards? I don't think this should change anything, but it might be worth a try.