We are attempting to enable multiprocess cooking by increasing the CookProcessCount
setting, but we are getting crashes like this one when cooking:
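(For context, this is roughly how the setting is configured; the section and key names below follow the multiprocess cook documentation, and the count of 4 is just an example value, not necessarily what we use.)

```ini
; DefaultEditor.ini
[CookSettings]
; Number of cook processes: 1 director + (N-1) CookWorkers.
; Any value > 1 enables multiprocess cooking.
CookProcessCount=4
```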
LogCook: Display: LostConnection to CookWorker 3. Log messages written after communication loss:
LogSockets: Error: [CookWorker 3]: Could not WritePacket to Socket. A single message with Guid 4DF3B36BBA2F4E04A846E894E24EB2C4 is larger than MaxPacketSize 1073741823.
LogCook: Display: [CookWorker 3]: Cooked packages 9809 Packages Remain 141 Total 9950
LogCook: Display: [CookWorker 3]: Cooked packages 9827 Packages Remain 123 Total 9950
LogSockets: Error: [CookWorker 3]: Could not WritePacket to Socket. A single message with Guid 4DF3B36BBA2F4E04A846E894E24EB2C4 is larger than MaxPacketSize 1073741823.
LogSockets: Error: [CookWorker 3]: Could not WritePacket to Socket. A single message with Guid 4DF3B36BBA2F4E04A846E894E24EB2C4 is larger than MaxPacketSize 1073741823.
LogSockets: Error: [CookWorker 3]: Could not WritePacket to Socket. A single message with Guid 4DF3B36BBA2F4E04A846E894E24EB2C4 is larger than MaxPacketSize 1073741823.
LogSockets: Error: [CookWorker 3]: Could not WritePacket to Socket. A single message with Guid 4DF3B36BBA2F4E04A846E894E24EB2C4 is larger than MaxPacketSize 1073741823.
LogCook: Error: [CookWorker 3]: CookWorkerClient failed to write message to Director. We will abort the CookAsCookWorker commandlet.
LogCook: Warning: CookWorkerCrash: CookWorker 3 failed to read from socket, we will shutdown the remote process. Assigned packages will be returned to the director.
(This is followed by LogCook: Error: Package [redacted] can only be cooked by a now-disconnected CookWorker. The package can not be cooked.
for each of the remaining packages assigned to the CookWorker.)
We are on UE 5.4.3. Every occurrence fails on the same GUID, which leads us to believe that a specific asset is causing the problem. Note that the MaxPacketSize of 1073741823 in the error is 2^30 − 1 bytes, so some single message apparently exceeds roughly 1 GiB.

For now, we have managed to work around the issue by increasing MaxOSPacketSize in CompactBinaryTCP.h (removing the "further restriction" explained in the adjacent comment). We do not consider this a permanent solution, though; we would much rather find out whether there is something we can change in our game data to properly fix the problem (and prevent it from happening again).