Course: Neural Network Engine (NNE)

@gabaly92

NPU support: It is currently very limited, defined by what is accessible through DirectML. It is basically limited to Intel for now, as reported here. But with this being steadily improved, I am confident we will soon get wider support.

If you are interested in SNPE, you should definitely have a look at Qualcomm’s runtime here. I never used it myself, but Qualcomm did a great job in opening up NNE for mobile.

NNERuntimeORT upgrade: Yes, we are trying to increase the upgrade cadence so that future releases ship more recent versions. Apologies for the delays.

The RDG interface: It will not be as easy as the GPU or CPU interface. The CPU and GPU interfaces are very close to what people are used to from training environments like PyTorch or TensorFlow. To do in-frame inference you need to know a few more details about Unreal, as it is very engine specific. In short: the inference call EnqueueRDG requires you to pass a Render Dependency Graph builder, which you have access to e.g. when you add your own ViewExtensionBase class, e.g. for post processing. So the interface is more powerful, as you run inference on frame resources without any CPU sync, but it needs a little more knowledge about UE.
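To make this a little more concrete, here is a rough sketch (not a drop-in implementation) of how such an RDG inference call could look from inside a scene view extension. Everything besides `EnqueueRDG` and `FRDGBuilder` (the class name, override, buffer sizes) is an assumption based on typical UE 5.x NNE and RDG usage and may differ between engine versions:

```cpp
// Sketch only: requires the Unreal Engine NNE, RenderCore, and Renderer modules.
// FMyViewExtension and the tensor sizes are hypothetical examples.
class FMyViewExtension : public FSceneViewExtensionBase
{
public:
	FMyViewExtension(const FAutoRegister& AutoRegister, TSharedPtr<UE::NNE::IModelInstanceRDG> InModelInstance)
		: FSceneViewExtensionBase(AutoRegister), ModelInstance(InModelInstance) {}

	virtual void PrePostProcessPass_RenderThread(FRDGBuilder& GraphBuilder, const FSceneView& View, const FPostProcessingInputs& Inputs) override
	{
		// Create (or import) RDG buffers for the model inputs and outputs.
		FRDGBufferRef InputBuffer = GraphBuilder.CreateBuffer(
			FRDGBufferDesc::CreateBufferDesc(sizeof(float), 224 * 224 * 3), TEXT("NNEInput"));
		FRDGBufferRef OutputBuffer = GraphBuilder.CreateBuffer(
			FRDGBufferDesc::CreateBufferDesc(sizeof(float), 1000), TEXT("NNEOutput"));

		// Bind the buffers and enqueue inference into the render graph:
		// no CPU readback is needed, the result stays on the GPU.
		TArray<UE::NNE::FTensorBindingRDG> InputBindings = { { InputBuffer } };
		TArray<UE::NNE::FTensorBindingRDG> OutputBindings = { { OutputBuffer } };
		ModelInstance->EnqueueRDG(GraphBuilder, InputBindings, OutputBindings);
	}

private:
	TSharedPtr<UE::NNE::IModelInstanceRDG> ModelInstance;
};
```

The point of the pattern is that the graph builder you receive in the render-thread callback is the same one the frame's other passes use, so the inference pass is scheduled alongside them without any CPU synchronization.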

1 Like

Awesome, I appreciate the quick reply and explanation; this answers my questions.

Hi @ranierin, is there a list of the supported platforms for NNE runtimes? I saw the message below in the NNE overview docs, but it doesn’t link to the supported platforms. For example, I could run ORTCpu on Win64 within the editor without issues, but when packaging with Win64 as the target platform, I got the Null pointer for runtime!

Not all runtimes are available on all platforms, even when the corresponding plugin is enabled. If a runtime is not available, or if it is available but does not implement the interface passed in the templated function, the returned weak pointer will be null. As runtimes can unload themselves, you should run a test for validity of the weak pointer before using it.

Runtimes typically register, unregister, load, and unload themselves along with their related plugin and module. However, a runtime’s lifetime and registration are up to its specific implementation.
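Following the quoted docs, the validity check could look like this; a minimal sketch assuming the `UE::NNE` API from 5.3+, where `NNERuntimeORTCpu` is the name string of the CPU ORT runtime:

```cpp
// Sketch: fetch a runtime by name and verify it before use.
// The weak pointer can be null if the runtime is unavailable on the
// platform, or become invalid later if the runtime's module unloads.
TWeakInterfacePtr<INNERuntimeCPU> Runtime =
	UE::NNE::GetRuntime<INNERuntimeCPU>(TEXT("NNERuntimeORTCpu"));

if (!Runtime.IsValid())
{
	UE_LOG(LogTemp, Error, TEXT("Runtime 'NNERuntimeORTCpu' is not available in this build."));
	return;
}
```

This is exactly the situation described in the packaging reports below: a runtime that resolves fine in the editor can still come back null in a packaged build if its plugin or module did not load there.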

In the below code, if I run this in the editor, it works fine. However, when I run a packaged build it fails because Runtime (created on line 97) is null. This is on UE 5.4.

Hi @dr.shixo , there is no list, but NNERuntimeORTCpu should definitely also work on Win64 packaged builds. Which engine version are you using? Can you check if the NNERuntimeORT plugin is loaded properly in the packaged build? It is the first time we hear of someone having issues with this.

This is pretty awesome, I’d love to see the source for this were you to make it available.

Thanks for your helpful posts in this thread btw it has really helped me ramp up.

1 Like

Of course! This demo is part of a bigger project I am working on: a plugin that will make it seamless for developers to deploy AI models like these and more. Stay tuned!

1 Like

Here is another demo that showcases what can be built with the plugin. The key here is that the models (a SpeechToText model and a Vision Language model) are all running locally on my laptop, no cloud APIs needed (no ChatGPT, no Claude, none of that, all local), and it’s designed to be plug and play.

Matrix Awakens + CORES

3 Likes

I tried to manually compile the plugin “NNERuntimeORTCpu”:

D:\Epic Games\UE_5.3\Engine\Build\BatchFiles> .\RunUAT.bat BuildPlugin -plugin="D:\Epic Games\UE_5.3\Engine\Plugins\Experimental\NNERuntimeORTCpu\NNERuntimeORTCpu.uplugin" -package="C:\Users\***\Desktop\NNE_Build"

But I received an error: “Module ‘VertexDeltaModel’ (Engine Plugins) should not reference module ‘NNEUtils’ (Plugin). Hierarchy is Plugin → Project → Engine Programs → Engine Plugins → Engine.”

My hard disk does not have enough space to compile the Unreal source code; is there a correct way to compile just this plugin?

The motivation for changing the source code was that I had a relatively complex ONNX model that ran inference well in Python. However, when I load my ONNX model via NNE, the Unreal Engine editor crashes and the log shows:

When an error occurs: LogWindows: Error: appError called: Assertion failed: false [File:D:\build++UE5\Sync\Engine\Plugins\Experimental\NNERuntimeORTCpu\Source\ThirdParty\ORTHelper\Private\ORTExceptionHandler.cpp] [Line: 16]
ONNXRuntime threw an exception with code 6, e.what(): "Exception during initialization: D:\build++UE5\Sync\Engine\Plugins\Experimental\NNERuntimeORTCpu\Source\ThirdParty\onnxruntime\Onnxruntime\Private\core\framework\allocation_planner.cc:1737 onnxruntime::PlannerImpl::BuildExecutionPlan num_logic_streams_ == 1 && !stream_nodes_[0].empty() was false."

The problem is in the first line of the following code:

ModelInstance = Runtime->CreateModel(ModelData)->CreateModelInstance();

if (!ModelInstance.IsValid())
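As a side note, the chained call above dereferences the result of `CreateModel` without checking it, so a null model (e.g. unsupported operators, or model data missing from a packaged build) crashes instead of failing gracefully. A safer sketch, keeping the method names from the snippet (in newer engine versions the call is named `CreateModelCPU`):

```cpp
// Sketch: check each step before dereferencing.
TSharedPtr<UE::NNE::IModelCPU> Model = Runtime->CreateModel(ModelData);
if (!Model.IsValid())
{
	UE_LOG(LogTemp, Error, TEXT("Failed to create model from model data."));
	return;
}

ModelInstance = Model->CreateModelInstance();
if (!ModelInstance.IsValid())
{
	UE_LOG(LogTemp, Error, TEXT("Failed to create model instance."));
	return;
}
```

This does not fix the ORT assertion itself, but it turns the hard editor crash into a loggable failure.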

Here is some information about the model:

  • format: ONNX v8
  • producer: pytorch 2.1.2
  • imports: ai.onnx v16
  • graph: main_graph

Other ONNX models exported with the same settings work fine. (In Python, the ONNX Runtime version I use for working inference is 1.16.0, but I noticed that the version used in UE 5.3 is 1.7.1.)

I tested the latest release, UE 5.5.1, and found that it runs smoothly now. It seems the issue was related to the version of ONNX Runtime.

Based on the discussion above, UE5.5’s use of version 1.17.1 is indeed a significant improvement, as it now supports a wider range of models.

As far as I know, some PyTorch models cannot be exported correctly with an opset_version < 16, and this parameter requires ONNX Runtime to be above a certain version for inference deployment.

Additionally, I would like to know:

  1. Aside from compiling from the Unreal Engine source code, is there any other way to build engine plugins with intricate dependencies, such as NNERuntimeORTCpu?
  2. I noticed that in the current version of UE5.5, NNERuntimeORT is no longer categorized as “Experimental.” After this, is there even the slightest possibility that the official team might extend this runtime as a plugin for earlier versions like UE4? I mean, something similar to the “Unreal Engine Visual Studio Integration Tool” that can be released on the Marketplace/Fab?:smiling_face_with_three_hearts:

Hi @HelloJXY ,

  1. As you discovered, the issue seems to be a conflict between the ORT version and the opset version. Thus, I would not recommend compiling the plugin individually just because of that, as there are cross references and other factors that may cause problems later.
  2. We don’t plan to back-port our runtimes to older versions but try to focus on the latest and greatest to be able to keep up with the fast pace in ML. Sorry!
1 Like

Hey NNE community,

Just wanted to share that there is a new tutorial out, showing how you can run inference on RDG.

Also, in case you missed it at Unreal Fest 2024 in Seattle, please check out this video. Qualcomm created their own NNE plugin, which lets you run neural networks on mobile phones using SNPE hardware acceleration (NPU).

3 Likes

Hello @ranierin. I am attempting to load an onnx model and run inference using the NNE plugin for Unreal Engine 5.5. Although I have been successful in loading the model, when I extract data from the OrtOutputTensors I always get out-of-range values. I expect all float values from the model to be between 0-1. The model has been run on other systems and has been verified to only output values in the expected ranges. Therefore, I feel like there is something wrong with the way I am extracting the data. I was hoping you could steer me in the right direction. Any help is greatly appreciated.

Here is the code I am using to pull the data from the tensor. This code is in NNERuntimeORTModel.cpp. I added to it starting on line 400.

Thanks,
Beau

Hi @BeauMS24

First, a piece of advice unrelated to the problem: try to avoid using any std functionality and instead use the infrastructure provided by UE (specifically, TArray instead of std::vector). This will save you from a lot of problems when producing code for multiple platforms.

Now for your problem: You should not need to add any code to our plugins; getting the results is exposed in the public API. When you call RunSync, you pass both input and output tensor bindings. Those bindings point to memory that you as the caller own (typically, you have a TArray somewhere and then you create bindings pointing to the memory inside the TArray). RunSync will read from the input bindings and write the results back to the output bindings (if there is enough space). So after you call RunSync, the memory to which the output binding points will contain the outputs.
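A sketch of that pattern, assuming the `UE::NNE` CPU API (the tensor sizes are made-up examples, and the exact return type of `RunSync` varies between engine versions):

```cpp
// Sketch: caller-owned buffers, with bindings pointing into them.
TArray<float> InputData;
InputData.SetNumZeroed(1 * 3 * 224 * 224);   // example input element count
TArray<float> OutputData;
OutputData.SetNumZeroed(1000);               // example output element count

// A binding is just {pointer, size in bytes}; the caller keeps ownership.
UE::NNE::FTensorBindingCPU InputBinding{ InputData.GetData(), InputData.Num() * sizeof(float) };
UE::NNE::FTensorBindingCPU OutputBinding{ OutputData.GetData(), OutputData.Num() * sizeof(float) };

// RunSync reads the inputs and writes the results into OutputData.
if (ModelInstance->RunSync({ InputBinding }, { OutputBinding }) != 0)
{
	UE_LOG(LogTemp, Error, TEXT("RunSync failed."));
}
// After this point, OutputData contains the model outputs.
```

No modification of `NNERuntimeORTModel.cpp` is needed; the bindings are the whole extraction mechanism.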

Btw, I think “out-of-range” usually indicates that you are trying to read outside the elements of the array, not that the contained values are in a wrong range.

Also please go through our quick start tutorial which describes how you setup inputs and outputs.


That’s Amazing :fire: :fire: :fire:

1 Like

Ahh…I see. I will update my code and see if I can get this working. Thanks!

1 Like

The potential for AI in Unreal Engine is unlimited and untapped; this is only the beginning :fire:

1 Like

Hi… I currently face the same issue. Did you by any chance find a solution?
@ranierin, on Unreal version 5.4.4, do you have any clue why this could be? I have tried for a few hours now to find out what’s wrong but can’t see any reason; it’s always null in the packaged project, otherwise working fine.

Hey @Towwi, this should definitely not happen. I noticed in @dr.shixo 's code that he only gets the runtimes if there is a valid model. Is that the case in your code too? Can you make sure that the model is valid in the packaged build? One thing I could imagine is that your model is not referenced by a primary asset (e.g. a level, or an actor in a level) and thus it is not cooked and is missing from the packaged build.

One way around it is to either reference it or to add it to a folder which is always cooked (check the project packaging settings for ‘Directories to Always Cook’).
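For reference, that setting lives in the project packaging config; a sketch of the corresponding DefaultGame.ini entry (the /Game/Models path is just an example, use the folder that holds your model assets):

```ini
[/Script/UnrealEd.ProjectPackagingSettings]
+DirectoriesToAlwaysCook=(Path="/Game/Models")
```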

If it is not this, I would try to see if the ORT module is present in the packaged build using the module manager. If this is not the case, there could be an issue with the plugin settings.

If none of this helps, you probably need to build the engine from source and step through the ORT module loading code.

Let us know how it goes!

1 Like