Course: Neural Network Engine (NNE)

@jvin1011 Yes, there are different runtimes with different platform support and also different format support. Unfortunately, we do not yet have a runtime that supports the tflite file format. However, there are a number of model converters out there that can convert tflite to e.g. onnx (see here for an example), and then you could use the existing runtimes.

Of course it is always possible to use any other inference engine directly (including tflite) without going through NNE. The downside is that you have to integrate the libraries yourself, and if they do not run on all the target platforms you are interested in, you need to do the same for other runtimes, creating huge fragmentation in your code. Or, e.g., if you want to use NPUs, you will need to add a runtime for each hardware vendor you plan to run on as well.

That is why we started NNE: to provide you with a single API that lets you access all platforms the same way and that can be extended to include future runtimes as well.

So long story short: I would try to export or convert your model to onnx and save yourself the pain of adding your own runtimes :wink:
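
For illustration, loading a converted onnx model through NNE then looks roughly like this. This is a minimal sketch following the NNE quick-start pattern; the runtime name, headers, and the MyModelData parameter are placeholders and may differ per engine version and project setup:

// Minimal sketch: create a CPU model from an imported .onnx asset via NNE.
// Assumes the NNE and NNERuntimeORT plugins are enabled, and MyModelData
// points to a UNNEModelData asset created by importing an .onnx file.
#include "NNE.h"
#include "NNEModelData.h"
#include "NNERuntimeCPU.h"

void CreateModelFromOnnx(TObjectPtr<UNNEModelData> MyModelData)
{
	// Look up a registered runtime by name; invalid if the plugin is missing.
	TWeakInterfacePtr<INNERuntimeCPU> Runtime =
		UE::NNE::GetRuntime<INNERuntimeCPU>(FString("NNERuntimeORTCpu"));
	if (Runtime.IsValid())
	{
		TSharedPtr<UE::NNE::IModelCPU> Model = Runtime->CreateModelCPU(MyModelData);
		if (Model.IsValid())
		{
			TSharedPtr<UE::NNE::IModelInstanceCPU> Instance = Model->CreateModelInstanceCPU();
			// ... set up input/output bindings on the instance and run inference.
		}
	}
}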

Hi Nico, I have exported my ONNX model in FP16 format. When I attempt model inference using the RDGHlsl backend, I encounter the following error.

[2025.05.20-06.53.23:601][ 0]LogNNERuntimeRDGHlsl: Warning: Input at index '0' (from template T0) is of type 'Half' which is not supported for that input.
[2025.05.20-06.53.23:601][ 0]LogNNERuntimeRDGHlsl: Warning: OperatorRegistry failed to validate operator: Slice
[2025.05.20-06.53.23:602][ 0]LogNNERuntimeRDGHlsl: Warning: Model validator 'RDGModel validator' detected an error.
[2025.05.20-06.53.23:602][ 0]LogNNERuntimeRDGHlsl: Warning: Model is not valid.
[2025.05.20-06.53.23:602][ 0]LogNNERuntimeRDGHlsl: Warning: Cannot create a model from the model data with id B67298DC456900C2B7797DA66DF4EA2F
[2025.05.20-06.53.24:736][ 0]LogTemp: Error: Could not create the RDG model

Is FP16 inference unsupported in NNE in general, or is it specifically unsupported with the RDGHlsl backend? The same model, when exported in FP32, works fine with the RDGHlsl backend, but performance is a little slow, so I’m exploring ways to optimize and improve it.

Hi @jvin1011,

Yes, fp16 support with the HLSL runtime is still limited. Which engine version are you using?
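
If it helps to narrow things down, you can inspect the tensor descriptors NNE reports for the model and log which inputs are 'Half'. A rough sketch, assuming a model instance created as in the NNE quick-start (exact type and header names may vary by engine version):

// Sketch: log which model inputs NNE reports as fp16 ('Half').
#include "CoreMinimal.h"
#include "NNERuntimeCPU.h"
#include "NNETypes.h"

void LogHalfInputs(const TSharedPtr<UE::NNE::IModelInstanceCPU>& ModelInstance)
{
	TConstArrayView<UE::NNE::FTensorDesc> Inputs = ModelInstance->GetInputTensorDescs();
	for (int32 Index = 0; Index < Inputs.Num(); ++Index)
	{
		if (Inputs[Index].GetDataType() == ENNETensorDataType::Half)
		{
			UE_LOG(LogTemp, Warning, TEXT("Input %d is fp16 ('Half')."), Index);
		}
	}
}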

If you work on a DirectX-based system, you can use the runtime NNERuntimeORTDml, where you have a high chance of getting the model running. Also, depending on the model, DirectML can access tensor cores, giving you an additional boost.

This may not be suited for your final product if you aim for multiple target platforms, but it will at least help you assess the performance of your model.
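
In case it is useful, you can also enumerate the registered runtimes before picking one, e.g. to check whether NNERuntimeORTDml is available on the current platform. A small sketch (the interface choice here is an assumption; NNERuntimeORTDml is only registered on DirectX-based systems with the plugin enabled):

// Sketch: list registered NNE runtimes and probe for the DirectML-backed one.
#include "NNE.h"
#include "NNERuntimeGPU.h"

void ProbeRuntimes()
{
	// Which names show up depends on the NNE runtime plugins enabled in the project.
	for (const FString& Name : UE::NNE::GetAllRuntimeNames())
	{
		UE_LOG(LogTemp, Log, TEXT("Registered NNE runtime: %s"), *Name);
	}

	TWeakInterfacePtr<INNERuntimeGPU> DmlRuntime =
		UE::NNE::GetRuntime<INNERuntimeGPU>(FString("NNERuntimeORTDml"));
	if (!DmlRuntime.IsValid())
	{
		UE_LOG(LogTemp, Warning, TEXT("NNERuntimeORTDml is not available on this platform."));
	}
}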

Best
Nico

I am using Unreal Engine 5.5.1. Since I am running on Windows, I initially tried using DirectX support, but it didn’t work for me, which is why I’m working with the HLSL runtime. Overall, the HLSL runtime works fine, but I’m looking for ways to optimize it. Any suggestions?

@ranierin in the release notes of Unreal Engine 5.6, it says this about NNE:

NNERuntimeORT upgrade to ONNX Runtime 1.20 and upgraded DirectML to version 1.15.2.

which is nice. Are there any other major updates or changes to NNE that we should be aware of?

Thank you

@jvin1011 apologies for the late reply. I think your best chance is to try to reduce the model size, sorry :frowning: Alternatively, try to get DirectML running; it should work if you are on a DirectX-based system.

@gabaly92 We spent a lot of time in this release on NNERuntimeIREE. However, it is still a work in progress and needs some expertise on how to adapt the model to get it running. But it shows great performance on CPU for small real-time models due to its low overhead compared to other runtimes.

Awesome, thank you for the update.

Is there any way to read onnx file metadata? I guess not, and it’s a feature I’d like to see in future updates.

You can try Netron, and here is its GitHub. If you want to modify an onnx file, try onnx-modifier. Hope it can be of help to you.
