TensorFlow

See issue #39 on the getnamo/TensorFlow-Unreal GitHub repository ("How to deploy to end user? / allow cpu fallback for gpu version?") for a discussion of this problem. The plugin currently assumes you distribute the correct version by specifying it in upymodule.json. There could be ways of detecting the compute capability of the machine it runs on and selectively pulling dependencies via pip; that enhancement would be a good contribution to the plugin. For now I'd recommend using the cpu version if you're unsure of the target environment.
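As an illustration, pinning the cpu build in upymodule.json might look something like the fragment below; treat the keys and version number as placeholders and check the upymodule.json that ships with the plugin for the exact schema:

```json
{
	"pythonModules": {
		"tensorflow": "1.15.0"
	}
}
```

Swapping "tensorflow" for "tensorflow-gpu" (at the same version) is what selects the gpu build, so any automatic fallback mechanism would effectively have to make that choice at dependency-install time.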

Alternatively, if you have a model that is more inference focused, there may be mileage in using the tensorflow c_api to run, say, a frozen model (.pb file); there are native dll distributions available that come with the requisite gpu bindings already embedded directly in tensorflow.dll. In the getnamo/TensorFlowNative-Unreal repository (a Tensorflow plugin for Unreal Engine using the C API, inference focused) I'm using those dlls and exploring how a more native tensorflow plugin might look. In its current state it will correctly load the dll and you can call the c_api from c++, but there is currently no example or unreal-specific helper code (still very WIP). I suspect this approach should be more amenable to being embedded in games with far fewer dependencies, while letting you train/research using e.g. the python based tensorflow-ue4 plugin.
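To give a feel for what calling the c_api looks like, here's a minimal sketch of loading a frozen graph. It assumes you link against tensorflow.dll / libtensorflow; the model path and the omitted input/output tensor names are placeholders, and error handling is abbreviated:

```c
/* Minimal sketch: load a frozen .pb graph with the TensorFlow C API.
 * Assumes linking against tensorflow.dll / libtensorflow; the model
 * path is a placeholder. */
#include <stdio.h>
#include <stdlib.h>
#include <tensorflow/c/c_api.h>

static void free_buffer(void* data, size_t length) { (void)length; free(data); }

/* Read the serialized GraphDef from disk into a TF_Buffer. */
static TF_Buffer* read_pb_file(const char* path) {
    FILE* f = fopen(path, "rb");
    if (!f) return NULL;
    fseek(f, 0, SEEK_END);
    long size = ftell(f);
    fseek(f, 0, SEEK_SET);
    void* data = malloc((size_t)size);
    fread(data, 1, (size_t)size, f);
    fclose(f);
    TF_Buffer* buf = TF_NewBuffer();
    buf->data = data;
    buf->length = (size_t)size;
    buf->data_deallocator = free_buffer;
    return buf;
}

int main(void) {
    printf("Loaded TensorFlow C library %s\n", TF_Version());

    TF_Status* status = TF_NewStatus();
    TF_Graph* graph = TF_NewGraph();
    TF_Buffer* graph_def = read_pb_file("frozen_model.pb"); /* placeholder path */
    if (!graph_def) return 1;

    TF_ImportGraphDefOptions* import_opts = TF_NewImportGraphDefOptions();
    TF_GraphImportGraphDef(graph, graph_def, import_opts, status);
    if (TF_GetCode(status) != TF_OK) {
        fprintf(stderr, "Import failed: %s\n", TF_Message(status));
        return 1;
    }

    TF_SessionOptions* sess_opts = TF_NewSessionOptions();
    TF_Session* session = TF_NewSession(graph, sess_opts, status);
    /* Inference would go here: look up input/output ops with
     * TF_GraphOperationByName and call TF_SessionRun. */

    TF_CloseSession(session, status);
    TF_DeleteSession(session, status);
    TF_DeleteSessionOptions(sess_opts);
    TF_DeleteImportGraphDefOptions(import_opts);
    TF_DeleteBuffer(graph_def);
    TF_DeleteGraph(graph);
    TF_DeleteStatus(status);
    return 0;
}
```

Since the gpu kernels come baked into the dll, shipping the right tensorflow.dll alongside the game is the only distribution decision left, which is why this route sidesteps the pip-dependency problem above.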

Finally, another common approach is to run the tensorflow code on a server/cloud instance and just use networking (e.g. the socket.io plugin) to pass data back and forth. That comes with its own drawbacks though (requiring an internet connection, scaling costs).