We are trying out Oodle network compression to help reduce our bandwidth usage, in order to more efficiently replicate some very large arrays via property replication. Because of the array's size, replicating it requires many packets. Oodle network compression is implemented as a PacketHandler component, so it applies compression on a per-packet basis. We're not seeing as much benefit as we were hoping for, since we're still paying the same amount of (uncompressed) packet overhead. We would like to try compressing the array buffer itself rather than the individual packets, which should result in fewer packets (and less packet overhead).
I’ve relaxed some of the limits in the OodleNetworkHandlerComponent code, since it assumes that the data being compressed should never be more than MAX_PACKET bytes. But I was hoping to get more info about this static_assert:
// Never allow DecompressedLength values bigger than this, due to performance/security considerations
static_assert(MAX_OODLE_PACKET_BYTES <= 16384, "Oodle packet max size is too big");
What are the performance / security considerations? Obviously, processing a larger buffer will take longer…but are buffers larger than 16384 bytes a degenerate case in the oodle code? What security implications could this carry?
Okay, you’ve touched on some hairy issues so bear with me as I try to unwind it all.
For context, the OodleNetworkHandlerComponent in UE was written by a third party before the Oodle team was at Epic. We (the Oodle team) are now at Epic providing direct support, but we didn’t write the UE-side OodleNetworkHandlerComponent.
On the Oodle side of Oodle Network there is no max packet size and there is no technical problem with using larger packets.
On the UE side in OodleNetworkHandlerComponent, there are some hard-coded max buffer sizes (e.g. “MAX_OODLE_BUFFER”). These are either on the stack or in static arrays, depending on what version of UE you use. Because of the way they’re implemented, you don’t want that to be a huge number, but it’s not really a technical limitation, just an implementation issue.
There are no performance or security issues with larger packets (assuming the buffer allocation model is changed). Preferably heap memory should be used, but not allocated and freed per packet; rather, a scratch buffer should be reused across packets.
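To illustrate the allocation model being suggested (this is a hypothetical sketch, not code from OodleNetworkHandlerComponent): a single heap-backed scratch buffer that grows on demand and is reused for every packet, replacing the fixed-size stack or static arrays.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical scratch-buffer holder: one heap allocation, reused for
// every packet instead of a fixed-size stack or static array.
class FPacketScratchBuffer
{
public:
    // Grow-only: after the first few packets the buffer reaches its
    // high-water mark and Get() becomes allocation-free.
    uint8_t* Get(size_t RequiredBytes)
    {
        if (Buffer.size() < RequiredBytes)
        {
            Buffer.resize(RequiredBytes);
        }
        return Buffer.data();
    }

    size_t Capacity() const { return Buffer.size(); }

private:
    std::vector<uint8_t> Buffer;
};
```

With this shape, lifting the 16384-byte cap just means the buffer grows to whatever the largest decompressed length happens to be, without a large static footprint or per-packet malloc/free.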
Also you are correct that doing the compression before splitting into packets would be better than the way it is done now (currently it’s done after splitting into MTU packets). I think the reason it’s done the way it is now is just because it was easy to integrate as a PacketHandler, and because most Epic games rarely send packets larger than MTU so splitting is not common in our games.
All that said, for very large data buffers being sent across the network, Oodle Network is usually not the best choice. At some size of buffer, using Oodle LZ (aka “Oodle Data”) is better, both in terms of compression ratio and speed.
Usually that crossover point is around 4 KB, but exactly where it falls depends on the data.
Our (Oodle) best practice advice for networking is to use Oodle Data for huge packets and Oodle Network for small packets.
For example, a common case we’ve seen is when players join a new game or zone, they need to get a huge update to refresh their local state, which can be as large as 100 KB in some games. That should be sent with Oodle Data compression. Then each tick after that the updates are typically small (1 KB or less), and those should be sent with Oodle Network.
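The routing rule above can be sketched as a simple size check on the send path. The names and the 4 KB threshold here are illustrative assumptions, not part of the Oodle API; the threshold should be tuned against your real traffic.

```cpp
#include <cstddef>

// Hypothetical send-path routing: large, occasional transfers go
// through Oodle Data (LZ); small, frequent updates go through the
// trained Oodle Network path.
enum class EPacketCompressor
{
    OodleNetwork,  // small per-tick state updates
    OodleData      // large join/zone snapshot transfers
};

// Illustrative crossover; ~4 KB per the discussion above, tune per game.
constexpr size_t GLargePacketThresholdBytes = 4 * 1024;

inline EPacketCompressor ChoosePacketCompressor(size_t PayloadBytes)
{
    return PayloadBytes > GLargePacketThresholdBytes
        ? EPacketCompressor::OodleData
        : EPacketCompressor::OodleNetwork;
}
```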
Make sure that the same large/small packet filtering rule is applied at training time, so the Oodle Network training data is built from the small packets it will see in runtime use. (In general, you get maximum compression when the training data matches real-world data as closely as possible.)
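That training-time filtering step can be sketched like this (a hypothetical helper over a captured packet corpus; `FPacket` and `FilterTrainingCorpus` are illustrative names, not Oodle API):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

using FPacket = std::vector<uint8_t>;

// Keep only the packets the Oodle Network path will actually handle at
// runtime, using the same size threshold as the send-path routing rule.
std::vector<FPacket> FilterTrainingCorpus(const std::vector<FPacket>& Captured,
                                          size_t LargeThresholdBytes)
{
    std::vector<FPacket> Small;
    for (const FPacket& Packet : Captured)
    {
        if (Packet.size() <= LargeThresholdBytes)
        {
            Small.push_back(Packet);
        }
    }
    return Small;
}
```

The large packets that get routed to Oodle Data would simply be excluded from the dictionary-training capture, so the trained model is not skewed by data it will never see.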
So the conclusion is that if you are only sending huge transmissions occasionally, it might be better to isolate those and use Oodle Data compression on them instead of trying to adapt your Oodle Network path to handle the huge packets.