Local Llama and UE5 integration

I’m using the Universal Offline LLM plugin. So far I’ve only been able to get Llama-2 working, and inference is currently unacceptably slow. However, the project will ultimately run on an HPC cluster and be delivered via Pixel Streaming, so local desktop performance isn’t the end state. I hope the plugin gets updated to support Llama-3.3.
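
In case it helps anyone hitting similar slowness: a rough sanity check is to benchmark the same GGUF model outside the engine, which tells you whether the bottleneck is the model/hardware or the plugin’s integration. Below is a minimal sketch using llama-cpp-python, assuming the plugin wraps llama.cpp under the hood (common for offline UE LLM plugins, but I haven’t confirmed it for this one); the model path and parameters are placeholders:

```python
# Sketch: measure raw tokens/sec for a GGUF model outside UE5.
# Assumes llama-cpp-python is installed (pip install llama-cpp-python)
# and that the plugin wraps llama.cpp -- both are assumptions here.
import time
from llama_cpp import Llama

llm = Llama(
    model_path="models/llama-2-7b-chat.Q4_K_M.gguf",  # placeholder path
    n_gpu_layers=-1,  # offload all layers to GPU (needs a CUDA/Metal build)
    n_ctx=2048,       # context window
    verbose=False,
)

prompt = "Write a short greeting for a game NPC."
start = time.perf_counter()
out = llm(prompt, max_tokens=128)
elapsed = time.perf_counter() - start

# The completion dict reports how many tokens were generated.
n_tokens = out["usage"]["completion_tokens"]
print(f"{n_tokens} tokens in {elapsed:.2f}s -> {n_tokens / elapsed:.1f} tok/s")
```

If the standalone number is also slow, the fix is on the model/quantization/hardware side (or GPU offload isn’t actually happening); if it’s fast standalone but slow in-engine, the plugin is likely running inference on the game thread or without GPU layers enabled.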