Bringing the Alpaca AI model to Unreal Engine. A couple of issues though...

Hi guys, it’s me, WaynedotCool, the author of the GPT-2 chat plugin,

Found here:

I’m currently trying to figure out a local method to bring this newer AI model, 13B Alpaca, into Unreal Engine. My current method uses a remote model: Flask hosts the model on a Linux server, and C++ HTTP request nodes/modules set up in the engine handle the communication. That approach would require a ton of RAM, and the cost of such a server would be quite expensive. I’d love to get a local design working for Alpaca, which is super close to GPT-3.5 on the market, but I’ll need some help with either cheaper hosting or a local configuration.

So far, I’ve heard of one possibility: combining the third-party software with my setup using Docker. That’s the biggest, most complicated variable I’m trying to overcome. When setting up a local model configuration, you have to create a directory with the third-party source files inside your plugin directory. Visual Studio won’t compile with the PyTorch source code, due to all of the similarly named subdirectories in the directory containing it, and there are also a few complications with the transformers module.

I talked to my buddy GPT-4 recently, who recommended Docker. Does anyone else have much experience using Docker? I’m curious how to build such a package or container with the third-party software that I’ll need…
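On the Docker idea, here is roughly what I’m picturing, as a hypothetical sketch only: the file name `server.py` and the exact pip packages are assumptions, and the 13B weights would need to be baked in or mounted separately. The point is that PyTorch and transformers live inside the image, so Visual Studio never has to compile any of their source trees:

```dockerfile
# Hypothetical sketch, not a working config: adjust names and versions
# to the real plugin layout. The heavy Python dependencies stay in the
# container instead of inside the plugin directory.
FROM python:3.10-slim
WORKDIR /app
RUN pip install --no-cache-dir torch transformers flask
COPY server.py .
EXPOSE 8080
CMD ["python", "server.py"]
```

The engine would then talk to the container over localhost the same way it currently talks to the remote Linux server.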
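For anyone curious what the server side of the remote setup looks like, here is a minimal sketch. My real server uses Flask; this version uses only the Python standard library so it runs with zero dependencies, and the `generate()` stub stands in for the actual Alpaca/transformers call (the route, field names, and stub are my illustration here, not the plugin’s real code):

```python
# Minimal sketch of the inference endpoint the engine talks to over HTTP.
# Standard library only; the real server uses Flask, which is analogous.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def generate(prompt: str) -> str:
    # Placeholder: the real version would call something like a
    # transformers text-generation pipeline loaded with Alpaca weights.
    return f"(echo) {prompt}"

class ChatHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON body the engine's HTTP request node sends.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        reply = generate(payload.get("prompt", ""))
        body = json.dumps({"reply": reply}).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep the console quiet

# To run standalone:
#     HTTPServer(("127.0.0.1", 8080), ChatHandler).serve_forever()
```

On the engine side, a C++ HTTP request module just POSTs `{"prompt": "..."}` to this endpoint and parses the `reply` field out of the JSON response.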