I would take whatever the runtime library is for the model I want to use, and build a Blueprint library around it.
If my model is a Caffe model (like LeNet), I’d use the Caffe code. If my model is a TensorFlow model (like MobileNet), I’d use the TensorFlow code. If my model is a Darknet model (like YOLO), I’d use the Darknet code.
Often, the simplest way to do this is to define a very narrow interface – “load model from file X” and “run inference based on inputs Y and Z” – and implement that interface in an intermediate C/C++ shim file. Compile that library separately, and then link it into your Blueprint function library as a header plus binary lib, to avoid header confusion and unnecessary include dependencies.
I am in no way experienced with machine learning, but I know some people who are, and the general trend today seems to be everyone hating TensorFlow and loving PyTorch. A quick Google search suggests PyTorch has a C++ API, so I guess that’d be a way to go. Pretty much everyone I know says the same thing: “I can’t believe I’ve wasted so much time with TensorFlow, and I can’t believe how much better PyTorch is.”
If the model you want to run is available as a TensorFlow model but not as a PyTorch model, it doesn’t matter that PyTorch is easier to work with.
What model are you planning to use? Or are you planning to construct and train your own model from scratch?