Hi there!
I have a few newb questions about this (apologies):
Is there C/C++ API support? As in, do I have to use blueprints to communicate with the plugin, or can I just write code?
What format does a trained network take in this scenario? Is it a .uasset? I’m more interested in using this library as a way to run already-trained TensorFlow networks in engine rather than as a way to train, so packaging is an important question.
You mention only supporting Windows platform at this point. Is that for training only, or also for running a network? Theoretically it shouldn’t be that big of a deal to get that aspect running on consoles? Or is this a dependency on the Python plugin?
The plugin is structured around a blueprint actor component called *TensorFlow Component*, which wraps threading and communication to an embedded Python layer. This way all of your machine learning can use largely unmodified TensorFlow Python files, and on the Unreal side you only have to worry about how to structure the data for your model. Basic usage instructions can be found here: https://github.com/getnamo/tensorflo…rflowcomponent
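For reference, a minimal skeleton of the kind of Python file the component expects looks roughly like this (structure follows the plugin’s bundled examples and the usage link above; details may differ between plugin versions):

```python
# Minimal sketch of a TensorFlow Component script, following the structure
# of the plugin's example scripts.
import tensorflow as tf
from TFPluginAPI import TFPluginAPI

class MyModelAPI(TFPluginAPI):

    # optional: set up or load your (vanilla TensorFlow) model here
    def onSetup(self):
        pass

    # optional: receive the json data you structured on the Unreal side,
    # return a json-convertible result
    def onJsonInput(self, jsonInput):
        return {"echo": jsonInput}

    # optional: start training your network
    def onBeginTraining(self):
        pass

# required module-level function the plugin uses to fetch your api
def getApi():
    return MyModelAPI.getInstance()
```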
Because the component is put together in blueprint, it would be troublesome to use it from C++ directly; you may need to write a wrapper that has a C++ base and is overridden in blueprint. It would probably make sense to refactor the TensorFlow component into a C++ base so that it can be called from both ends (a good enhancement issue).
The trained network will be your usual .pb or checkpoint files, as all your machine learning should be vanilla TensorFlow. The plugin already packages correctly.
It is limited to Windows atm because it uses a cmd subprocess to handle pip dependencies without blocking anything; this could probably be expanded to multiple platforms (source can be found at https://github.com/getnamo/UnrealEng…ipts/upypip.py). The pip and Python dependency would make console support hard atm.
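To illustrate the idea (the actual logic lives in the upypip.py linked above; this is just a hedged sketch, not the plugin’s code):

```python
# Hedged sketch of the Windows-only approach described above: spawn pip in
# a cmd subprocess so dependency installs never block the editor thread.
import subprocess
import sys

def install_async(package):
    # "cmd /c" is Windows-specific, which is the main reason this part
    # of the plugin is limited to Windows for now
    return subprocess.Popen(
        ["cmd", "/c", sys.executable, "-m", "pip", "install", package])
```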
That said, wider support has been on my mind, specifically for inference.
Originally I thought of including a TensorFlow DLL directly in this plugin, but the API was moving too rapidly back then and there were no DLLs to download, so the Python approach was chosen to allow for easy updates to the latest builds. The scene has changed since then, and while the repository does include a TensorFlow DLL, I believe it will probably be best to split native TensorFlow into a fresh plugin that is inference focused, something like tensorflow-native-ue4. It would probably use the C API with some C++ wrapper and simplify loading your model, running inference, and getting data in UE4 format. I made a blank plugin just now for that: GitHub - getnamo/TensorFlowNative-Unreal, but it will take some time to bring it to a functional build. Contributions welcome if you want to help; there would need to be native builds of the library for whatever target hardware you’re looking for.
Python is whitespace-sensitive; you have to use either tabs or spaces consistently. Check your file in something like Sublime Text to see which spacing you’re using.
Thanks! I take it this means the plugin could be used for inference on TensorFlow networks that were trained outside of Unreal, assuming we can find a way to package the .pb files? I imagine there’d still be some work to get TensorFlow itself to compile on consoles as well, but this is a good start. Thanks!
That’s the idea. Here is some example c_api code for loading a graph and running a session: https://github.com/Neargye/hello_tf_…ession_run.cpp. Wrapping that in a more Unreal-friendly way would be the next step to make things easier.
Keep in mind that another way people do machine learning is to run it as a cloud service: you pipe data to your server and get results back, so you don’t have to worry about compatibility for your platform of choice.
Suppose I package a version that uses CUDA and cuDNN, but the PC it runs on doesn’t have CUDA or cuDNN installed, only a card that supports them. Would it still work?
How would it work on a system without a GPU? Can it detect that and run the CPU version?
See how to deploy to end user? / allow cpu fallback for gpu version? · Issue #39 · getnamo/TensorFlow-Unreal · GitHub for a discussion of this problem. The plugin currently assumes you distribute the correct version by specifying it in the upymodule.json. There could be ways of detecting the compute capability of the computer it runs on and selectively pulling dependencies via pip; that enhancement would be a good contribution to the plugin. For now I’d recommend using the CPU version if you’re unsure of the environment.
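As a rough illustration of what that enhancement might look like (purely a sketch, not something the plugin does today):

```python
# Hedged sketch: probe for a usable NVIDIA GPU and pick which TensorFlow
# pip package to request accordingly. Names and approach are illustrative.
import subprocess
import sys

def pick_tensorflow_package():
    try:
        # nvidia-smi is missing or fails when there is no NVIDIA GPU/driver
        subprocess.check_output(["nvidia-smi"])
        return "tensorflow-gpu"
    except (OSError, subprocess.CalledProcessError):
        return "tensorflow"

if __name__ == "__main__":
    package = pick_tensorflow_package()
    subprocess.check_call([sys.executable, "-m", "pip", "install", package])
```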
Alternatively, if you do have a model that is more inference focused, there may be mileage in using the TensorFlow c_api to run, say, a frozen model or .pb file; there are native DLL distributions available that come with the requisite GPU binds embedded directly in a tensorflow.dll. In GitHub - getnamo/TensorFlowNative-Unreal I’m using those DLLs and exploring how a more native TensorFlow plugin might look. In its current state it will correctly load the DLL and you can call the c_api from C++, but there is currently no example or Unreal-specific helper code (still very WIP). I suspect this approach should be more amenable to being embedded in games, with far fewer dependencies, while letting you train/research using e.g. the Python-based tensorflow-ue4 plugin.
Finally, another common approach is to run the TensorFlow code on a server/cloud instance and just use networking (e.g. the socket.io plugin) to pass data back and forth. That comes with its own drawbacks though (requiring an internet connection, scaling costs).
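For that server route, the Python side might look something like the sketch below (using the python-socketio package, installed via `pip install python-socketio aiohttp`; the event names and run_model function are made up for illustration, and the Unreal side would use the socket.io client plugin):

```python
# Hedged sketch of a remote inference server the socket.io plugin could
# talk to. "json_input", "json_result" and run_model() are hypothetical.
import socketio
from aiohttp import web

sio = socketio.AsyncServer()
app = web.Application()
sio.attach(app)

def run_model(data):
    # placeholder for your actual TensorFlow inference
    return data

@sio.on("json_input")
async def json_input(sid, data):
    result = {"prediction": run_model(data)}
    # send the answer back to the client that asked
    await sio.emit("json_result", result, to=sid)

web.run_app(app, port=3000)
```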
Another simple question; this is more related to the Python plugin, but I didn’t know where else to ask it.
I have a .py file inside a folder under Scripts, or any folder inside the Content folder.
I then add this path to the additional paths under Python, but when making a PyActor and adding that class to the Python module, I am unable to make it work.
I have a trained model, but testing it with the Python plugin is a huge pain.
Each time I compile without closing the project, the SocketIO plugin crashes the project.
That function doesn’t return the way you think. The Python TensorflowComponent wraps callbacks such that they will automatically call back on *json_input_gt_callback*, whether you use multi-threading or not. If you do have multi-threading on, you wouldn’t be able to receive the answer as a function return anyway. You need to listen to the json_input_gt_callback function, which has the json results you’re looking for. See https://github.com/getnamo/tensorflow-ue4/blob/master/Content/Scripts/TensorFlowComponent.py#L101 for the Python logic handling this. You can modify that section to return the results directly if you don’t use multi-threading.
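To illustrate the flow described above (a simplified sketch, not the component’s exact internals at the linked line; only json_input_gt_callback and onJsonInput are the plugin’s real names):

```python
# Hedged sketch of the wrapping behavior: onJsonInput runs, then the
# result is pushed to the json_input_gt_callback event instead of being
# returned to the original caller.
class TensorflowComponentSketch:
    def __init__(self, tfapi, callback):
        self.tfapi = tfapi                       # your ExampleAPI instance
        self.json_input_gt_callback = callback   # the BP-facing event

    def send_json_input(self, json_args):
        # with multi-threading on, this would run on a worker thread
        # and the callback would fire later on the game thread
        result = self.tfapi.onJsonInput(json_args)
        self.json_input_gt_callback(result)      # answer delivered here
```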
I generally haven’t used this plugin with C++ inference; typically, developing and calling json input from BP is more amenable to ML prototyping. That said, I think a refactor is in order that would allow the TensorFlow component to be called natively, which would simplify cases like these (and allow using the same API to call remote Python servers). This refactor may be a while though, as I don’t have free open-source time in the near term.
I’m getting an ImportError: DLL load failed: The specified module could not be found, similar to the issue getnamo addressed above; however, I have tensorflow-gpu successfully installed on my computer. I installed TensorFlow with Anaconda. Could that have something to do with the plugin not finding it?
I managed to get a TF2 version of addExample.py working, but I’m a bit confused about how to correctly implement the operation self.c = self.a + self.b:
```python
import tensorflow as tf
import unreal_engine as ue
from TFPluginAPI import TFPluginAPI

class ExampleAPI(TFPluginAPI):

    #expected optional api: setup your model for training
    def onSetup(self):
        #note: tf.Variable's second positional argument is `trainable`,
        #so the dtype has to be passed as a keyword argument
        self.a = tf.Variable([0.0], dtype=tf.float32)
        self.b = tf.Variable([0.0], dtype=tf.float32)
        #a plain python bool is enough here; changeOperation assigns bools
        self.op = True

    #expected optional api: parse input object and return a result object, which will be converted to json for UE4
    def onJsonInput(self, jsonInput):
        print(jsonInput)
        self.a = tf.dtypes.cast(jsonInput['a'], tf.float32)
        self.b = tf.dtypes.cast(jsonInput['b'], tf.float32)
        if self.op:
            #in TF2 eager mode tf.add(self.a, self.b) is equivalent to self.a + self.b
            return tf.add(self.a, self.b).numpy().tolist()
        else:
            return tf.subtract(self.a, self.b).numpy().tolist()

    #custom function to change the op
    def changeOperation(self, type):
        if type == '+':
            self.op = True
        elif type == '-':
            self.op = False

    def getVersion(self, jsonInput):
        ver = tf.__version__
        print(ver)
        return ("GPU Available: ", tf.test.is_gpu_available())

    #expected optional api: start training your network
    def onBeginTraining(self):
        pass

#NOTE: this is a module function, not a class function. Change your CLASSNAME to reflect your class
#required function to get our api
def getApi():
    #return CLASSNAME.getInstance()
    return ExampleAPI.getInstance()
```
Love your plugin. I’m thinking about making something quite similar that merely visualizes different information from TensorFlow in an Unreal Engine 3D world.
I don’t know exactly how it’ll look yet, but some combination of these two videos:
I’ve looked pretty hard and haven’t seen anything close to what I’m thinking of building. If I move forward with it I think your plugin will help jumpstart my progress, thanks!
The general idea is that you should be able to easily boot up a server (local or truly remote) and do remote dev work as if it were running UnrealEnginePython, allowing you to train in more typical Linux environments with no restrictions on machine learning library or version. The matching Unreal frontend would be very lightweight and should be available on most OS platforms. This should reduce setup, mismatch, and version headaches. The tensorflow-ue4 (UnrealEnginePython environment) plugin would still be one of the possible backends, and the API will be very similar to the old one, just with the option of swapping in a remote or native variant without other code changes.
In addition, there is planned work on the tensorflow-native-ue4 plugin to use a similar base API as the remote/Python one, but with an inference focus. This would enable you to package e.g. a .pb file and run inference on your trained model at native speeds. There is a possibility of expanding this API to support more than inference directly from BP, but that wouldn’t be the focus at this time.
If you’re using the library, feedback on this new architecture work is welcome as I want to make sure it covers use cases you’d be interested in.
This is awesome. I want to train different AIs with different Game Lore/World Datasets to drive NPC chatbots. I’m open to any conversation on this topic.