[Feature Request] Official Machine Learning API Support (e.g. cuDNN, TensorFlow, TensorFlow Lite)

I’m definitely not an expert in APIs or what mixes best with game engines; TensorFlow is just the most robust open-source example I know of. The point is it’s an open-source API that gives you high-level access to fairly optimized parallel computing and signal analysis functions for machine learning, the kind of stuff that takes a while to write from scratch if you’re one of the handful with the know-how. It’s really just a bunch of prewritten cuDNN (and now TensorRT) functions and wrappers. Since it’s open source, you can always go in and optimize what are essentially templates, just like we already do with Unreal.
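To make that concrete, here’s a minimal sketch (in Python, with made-up layer sizes) of what that high-level access looks like: a few lines of TensorFlow that, on a supported NVIDIA GPU build, dispatch down to cuDNN convolution kernels without the caller ever touching them.

```python
# Minimal sketch of "high-level access": the layer sizes and shapes here are
# arbitrary placeholders, not a recommendation. On a GPU build of TensorFlow,
# the convolution layers below resolve to cuDNN kernels automatically.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(256, 256, 3)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10),
])

# One forward pass on random data, just to show the call surface.
x = tf.random.normal([1, 256, 256, 3])
y = model(x)
print(y.shape)  # (1, 10)
```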

What I see is that these frameworks are becoming better generalized for average hardware almost daily, and machine learning algorithms as a whole are getting easier to reason about. Julia, for example, is written from the ground up to take full advantage of the GPU, and it can talk to C++ and Python programs to write shaders or other compute functions with better parallel computing support at the base, so it’s entirely possible to have code that is both optimized and general these days. I see dozens of use cases for ML in 3D in general. Many of them aren’t feasible yet on common hardware (hence a shareholder wanting results this quarter, not in two years), but that’s not the point. The examples you listed are again just specific implementations of ML modeling where there could be a general framework. Look at the Universal Style Transfer math: it’s about as modular and general as you can get, which is why it transfers so well to other problems. Most ML problems fall under a few broad classifications (e.g. visual-spatial, recurrent/memory learning, attention, etc.), each of which has many competing general models (e.g. CNNs vs. CapsNets, ResNet vs. AlexNet, RNNs vs. LSTMs, manifold learning vs. deep learning, different rectifiers/decoders, different layering/routing, different transforms, denoising, etc.) for doing the computing job with good performance and minimal loss. The rest is about adapting those to specific problems.
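For a sense of how general that Universal Style Transfer math actually is, here’s a rough NumPy sketch of the whitening-coloring transform at its core. It’s plain linear algebra on feature covariances, independent of any particular network; the feature shapes below are invented for the example, and real features would come out of a pretrained encoder (e.g. VGG) at some layer.

```python
# Whitening-coloring transform (WCT) sketch: strip the content features'
# channel covariance, then re-color them with the style features' covariance.
import numpy as np

def wct(content_feat: np.ndarray, style_feat: np.ndarray, eps: float = 1e-5) -> np.ndarray:
    """content_feat, style_feat: (channels, pixels) feature matrices."""
    # Whiten the content features: remove their mean and channel covariance.
    fc = content_feat - content_feat.mean(axis=1, keepdims=True)
    cov_c = fc @ fc.T / (fc.shape[1] - 1) + eps * np.eye(fc.shape[0])
    wc, vc = np.linalg.eigh(cov_c)
    whitened = vc @ np.diag(wc ** -0.5) @ vc.T @ fc

    # Color them with the style features' covariance, then add the style mean.
    fs = style_feat - style_feat.mean(axis=1, keepdims=True)
    cov_s = fs @ fs.T / (fs.shape[1] - 1) + eps * np.eye(fs.shape[0])
    ws, vs = np.linalg.eigh(cov_s)
    colored = vs @ np.diag(ws ** 0.5) @ vs.T @ whitened
    return colored + style_feat.mean(axis=1, keepdims=True)

# Toy usage with random "features" standing in for encoder activations.
content = np.random.randn(64, 32 * 32)
style = np.random.randn(64, 32 * 32)
stylized = wct(content, style)
print(stylized.shape)  # (64, 1024)
```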

This is going to be a big part of the future if we learn how to use it carefully. I’m pretty sure even fine-tuned shaders like the ones in The Last of Us could be found through ML optimization without the man-hours an insane hand-written assembly shader requires (and of course we won’t always have the hardware limitations that make that kind of job necessary, but there’s always the next level). There’s a huge window for exploration, and many, many layers of graphics computing where that kind of modeling can and should be applied. Granted, we have to build those optimizations on priors that took a ton of man-hours to produce, but there is already a ton of code, data, and PhD theses lying around. I’m baffled more people aren’t recognizing what we’re sitting on here.
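As a toy illustration of the “find a cheap shader term by optimization” idea (and only a toy: the target function, the polynomial degree, and least squares as the optimizer are all stand-ins I picked for the example), something like this captures the shape of the workflow:

```python
# Fit a cheap approximation to an "expensive" reference term.
# Here the reference is a sharp specular lobe pow(cos_theta, 64) and the
# cheap stand-in is a low-order polynomial fit by least squares.
import numpy as np

cos_theta = np.linspace(0.0, 1.0, 512)
expensive = cos_theta ** 64            # reference term we want to approximate

# Degree-6 polynomial: a handful of multiply-adds at runtime.
coeffs = np.polyfit(cos_theta, expensive, deg=6)
cheap = np.polyval(coeffs, cos_theta)

max_err = np.max(np.abs(cheap - expensive))
print(f"max absolute error of the cheap approximation: {max_err:.4f}")
```

A real pipeline would swap in an actual lighting model, a perceptual error metric, and probably a small network trained by gradient descent, but the loop is the same: define a reference, define a cheap parameterized approximation, and let the optimizer do the tuning instead of a person.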

The creative space is a lot more than using the tools; arguably it’s just as much (if not more) about creating the tools.