Course: Neural Network Engine (NNE)

Hello,

I am not very versed in this neural network stuff, but from what I understand, this version only accepts ONNX models. Unfortunately, the model I wanted to use with it is in the .pth format, and I could not find an idiot-proof way to convert it to ONNX. Idiot-proof meaning no coding, Python scripts or GitHub required, because I have zero clue about all of that :sweat_smile:
Are there plans to support other formats too, or would there be a chance to integrate a converter, so that we could convert other formats to the required ONNX format?

The model I had in mind was for generating depth maps from regular images (or screenshots or textures), DepthAnything V2:

If not, does anyone know a way to convert these models to ONNX?

Hey @Suthriel, different NNE runtimes can process different file formats, but currently ONNX is the one with the widest support.

pth files are state dictionaries: they contain the weights but not the actual model, so you will not be able to convert the file directly. You will need to check out the repo you mentioned, create the (empty) model using their Python code, then load the dictionary into the model (check out the PyTorch tutorials), and finally export it with the regular torch.onnx export functions.
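Roughly, such an export script looks like the sketch below. The import path, constructor arguments, input resolution and filenames are placeholders you would take from the DepthAnything V2 repo itself, so treat this only as an outline of the workflow, not as a ready-made script.

```python
# Sketch of the state-dict -> ONNX workflow described above.
# The import path, constructor arguments and filenames are placeholders;
# use the ones from the DepthAnything V2 repository.
import torch
from depth_anything_v2.dpt import DepthAnythingV2  # assumed module/class name

model = DepthAnythingV2(encoder="vits")                      # build the (empty) model
state_dict = torch.load("depth_anything_v2_vits.pth", map_location="cpu")
model.load_state_dict(state_dict)                            # fill it with the weights
model.eval()

dummy_input = torch.randn(1, 3, 518, 518)                    # example input resolution
torch.onnx.export(
    model,
    dummy_input,
    "depth_anything_v2_vits.onnx",
    input_names=["image"],
    output_names=["depth"],
    opset_version=17,                                        # keep <= 18 for ONNX Runtime 1.14
)
```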

I recommend doing a couple of tutorials; model conversion is always a bit tricky, especially when you are new to this.


Hey @tracexCZE sorry for the late reply.

Can you use a tool like netron.app to check the output dimensions of the model you are loading? If it has a batch size in the input, it typically also appears in the outputs, so I would assume the output shape is [batch, sequence_len, embedding_size]. Note that while NNE checks input shapes, it does not check output shapes. If you provide too little output memory (e.g. [batch, embedding_size]), some runtimes may copy only the part of the result that fits into that memory, and you may thus observe wrong results. You said it works when you use the sequence size as the batch size; that is effectively the same as setting the batch size to 1 and therefore having the output shape [1, sequence_len, embedding_size], so you allocate the right amount of memory.
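If you want to double-check those reported dimensions without Netron, onnxruntime itself can print them. A minimal sketch (the filename is a placeholder):

```python
# Print the input/output names and shapes the model reports, so the output
# buffer can be sized as batch * sequence_len * embedding_size elements.
import onnxruntime as ort

session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
for tensor in session.get_inputs():
    print("input :", tensor.name, tensor.shape)
for tensor in session.get_outputs():
    print("output:", tensor.name, tensor.shape)   # e.g. ['batch', 'sequence_len', 384]
```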

Regarding the deviation: different runtimes have different implementations of operations and, due to finite-precision arithmetic, produce slightly different results. Networks with many layers in particular can propagate and accumulate these errors a lot. So it is normal that you get some differences between PyTorch, TensorFlow, onnxruntime, etc.

I hope that helps you understand why your fix made it work :slight_smile:

@Suthriel I found the ONNX version on Hugging Face here

onnx-community/depth-anything-v2-small at main (huggingface.co)

but when I loaded the model, it did not work because of the ir_version. You can check the model compatibility table for onnxruntime here:

Compatibility | onnxruntime

NNE is based on onnxruntime 1.14 if I am not mistaken, so the model converted to ONNX has to use max opset 18 as per the compatibility table, and has to have max IR version 8.

When I loaded the model using NNE in Unreal 5.4.3, it failed because the IR version was 9. I loaded the model in Netron and it showed opset 14, which is fine, but IR version 9. So I think the model has to be exported to ONNX again with an opset <= 18 and, if possible, IR version 8. Whether that works will depend on the model layers though; this is a very new model, so it might need IR version 9 for its newer layers.

@ranierin correct me if I am wrong. Attached are the output log for the model load and the Netron model view; I used the int8 model from the above repo, and all of them are IR version 9.
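For anyone who wants to check these fields without Netron, the onnx Python package prints them directly. A quick sketch (the filename is a placeholder):

```python
# Inspect the IR version and opsets of an ONNX file before loading it in NNE.
import onnx

model = onnx.load("depth_anything_v2_small.onnx")
print("IR version:", model.ir_version)   # ONNX Runtime 1.14 reads IR version <= 8
print("Opsets:", [(imp.domain or "ai.onnx", imp.version) for imp in model.opset_import])
```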


Thanks Nico and @gabaly92 for the answer and the unfortunate info :slight_smile: This conversion and Python stuff is unfortunately way above my understanding and knowledge :sweat_smile: That is why I hoped for an idiot-proof (drag and drop) conversion method that would give me the supported ONNX version. And it seems that not even all ONNX models are supported, according to gabaly's tests :confused:

Still, thanks a lot for the info.

Edit: But would that also mean that I could use this ONNX version of DepthAnything if NNE gets upgraded to a newer onnxruntime version?


@Suthriel @ranierin I was able to run DepthAnything V2 in NNE ORT CPU; I found the proper model here

Releases · fabio-sim/Depth-Anything-ONNX (github.com)

tested with depth_anything_v2_vits_dynamic.onnx from the repo

The model is IR version 8 and opset version 17. I guess I misunderstood how the IR version works: it depends on the version of the onnx package used to convert the model. So for the current version of NNE (ONNX Runtime 1.14), any model would need to be converted using max ONNX 1.13, which produces IR version 8, and max opset version 18, as long as the model layers are supported in opset 18 as per the compatibility table above.
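For models that only miss these constraints, the onnx version converter can sometimes retarget them; it only works when every layer already exists in the older opset, so treat this as a sketch rather than a guaranteed fix (filenames are placeholders):

```python
# Attempt to retarget an ONNX model to opset 18 / IR version 8 for ONNX Runtime 1.14.
# This only succeeds if all operators in the graph are available in the target opset.
import onnx
from onnx import version_converter

model = onnx.load("model_ir9.onnx")
converted = version_converter.convert_version(model, 18)   # retarget the default-domain opset
converted.ir_version = 8                                   # ORT 1.14 expects IR version <= 8
onnx.checker.check_model(converted)                        # sanity check; raises if invalid
onnx.save(converted, "model_for_nne.onnx")
```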

@ranierin does this make sense? Correct me if I am wrong.

Attached are the code, a sample image and the generated depth map:

DepthAnythingActor.cpp (9.1 KB)
DepthAnythingActor.h (1.6 KB)

@Suthriel what do you want to build with the model? I might be able to help out.


Sorry for the delayed response; life is currently not treating me so nicely -.-´ Anyway.

I have no concrete plans, it is more about seeing what is possible and testing, but the core idea was to find a way to create depth masks from images or video streams that do not have their own depth masks (a webcam stream, your regular video, etc.), so that you can then still use e.g. Niagara particle effects and have the particles react to the correct positions/depths.

Like shown in this video here, where the particles form a 3D representation of what the test drone sees, based on the depth mask. Except that I would first need to create said depth mask from scratch and would not get it delivered for free by the engine ^.^
The depth mask gets applied at about the 20 minute mark.

Or just use the depth mask for displacement or WPO and convert your flat texture into a 3D structure in your level, so that other actors and light can interact with it more easily.

But what concerns me… is that everything I have found so far seems to require C++ o.O even your actor (many thanks for that :slight_smile: ), but I would prefer a Blueprint-only version. Is all of this actually supported or doable with Blueprints too, or is this so far more of a C++-only project and plugin? :confused:


This is awesome work @gabaly92, well done! So crazy what those models can do these days :slight_smile:

Your observations are correct: with ORT 1.14 (the runtime), operators defined in ONNX 1.13 (the file format) can be used, up to opset version 18 (each operator has different opset versions; lower is fine, as ONNX provides backward compatibility). The ONNX IR version is fixed to 8. So if you export or convert a model with ONNX 1.13, it should work. For the cases where you won't find the ONNX on Hugging Face, you can write a Python script to download the torch (or TF) model and export the ONNX manually (or with Optimum). However, this can quickly become a rabbit hole… (Don't ask how I know ^^)

@Suthriel Yes, once we upgrade ORT inside our plugin, you should be able to simply drag and drop the file and it will work. We are working on the update but cannot give a timeline on this, sorry.

And you would still need C++: NNE provides the base infrastructure and is thus accessed through C++. Features that are built on top sometimes offer Blueprint bindings (e.g. NeuralPostProcessing, which can be accessed from the Material editor), but sometimes again only a C++ interface.

Since you are working on a (pretty cool) low-level feature, you will probably want to work in C++ anyway, if only performance-wise, and because you can embed more tightly into the engine (e.g. processing the webcam properly and frame-aligned). So I would recommend you start with C++, as it will give you far more possibilities than the limited Blueprint functionality alone.


Thanks for the info :slight_smile: The thing is, it is literally impossible for me to use C++ stuff on my machine, since my Windows 10 version busted itself in such a way that a necessary update required for Visual Studio cannot be installed. Without this update, no Visual Studio, no C++ :sweat_smile:
And I tried long and hard to fix this and get this update up and running, but it turned out the only solution is to completely reinstall Windows. But my machine is also so old that it is not worth the trouble.
I have already decided to get a new machine next year, around the time of Windows 10's end of life or end of service, so late summer to autumn. It will then be a machine with parts selected with neural networks in mind.

So I am not in a hurry :slight_smile: This also means you have more than enough time for your upgrades, and I will just keep an eye on the NNE progress ^.^
I also have a lot of other stuff I want to look into and test, which luckily does not require VS or C++, so I can do that until I get my new workhorse.

Have a splendid day and weekend :slight_smile:


@ranierin thank you for confirming. Yes, this model is mind-blowing: a pretty good depth map from just a single image, and that is not even the biggest version of the model.


Hello Nico,

I’m currently upgrading our project from UE 5.3 to 5.4, and I noticed that the NNERuntimeORT plugin is set to EditorAndProgram. This setting seems to cause the runtime to be invalid in the packaged build.

Will this behavior be changed in the next version?


@saxpenguin Yes, we removed the ORT runtime from standalone in 5.4 as we could not control how many CPU cores a network would occupy. It would therefore have been pretty unusable inside a game, as whenever you run a network you could potentially block the whole game.

We will re-add ORT to standalone and provide plugin settings (accessing ORT settings underneath) to let developers choose how many cores they want to give to ML work, or whether to run the network entirely on the calling thread, giving back control over CPU budgeting.

Meanwhile, you can also try out NNERuntimeIREE: it requires you to export your neural network to .mlir rather than .onnx, but it shows great performance and is available at runtime too.

Happy coding!


Thanks Nico, I would like to try GPU support for our project. Which runtime would you recommend for optimal performance?

Best regards

@saxpenguin I would start with NNERuntimeORTDml. It is limited to DX12-capable devices, but it is able to access dedicated hardware through meta-commands.

Hi Nico,
Thank you for the tutorial. This opens up yet another Pandora’s box of creative possibilities.

If you could make another example, may I suggest a topic?

Take the PyTorch MiDaS depth estimation model, convert it to ONNX and bring it in via NNE.

Aquarium example

  • Use the depth data coming in from a video stream to detect proximity to the aquarium
  • Have an object react to proximity
  • A shoal of fish (Niagara particles) reacts to proximity

Doable ?

Cheers, and here’s to a future with many creative AI possibilities.

b

Hi @behram_patel, thanks for the nice words. That is a nice idea for a tutorial, but I am not sure when/if we can find the time and resources to create new custom tutorials on our end. I welcome you to create this tutorial on your own and contribute it to the community :wink:


If you get the correct ONNX version of MiDaS, then you can get it to work. Just scroll up a little, because we had depth estimation just a few posts ago :wink: But it was with DepthAnything V2:


I’m on it, Nico.
This is for my Unreal Engine students, so my hope is that other students will also be encouraged to try this out.

Can I ask my beginner questions in this thread, or should I start another one and tag you there?

I will write out a plan of action of what I “think” I should be doing, and perhaps you can correct/guide me if I am doing something stupid?

Cheers,
b


@behram_patel, I deployed MiDaS and YOLOv8 in the Stack O Bot sample project and would be happy to help if you are open to that. Here is a demo of object detection and depth estimation running via the OrtDml runtime: the first UI widget is what it sees, the second is the output depth map from MiDaS, and the third is the object detection from YOLOv8. I am using the smallest size of both models.


Holy moly, WOW!
Maybe you should be making the follow-up tutorial to the first two :wink:

Yes, I’d definitely like some help on this.
Let me write out what I think I should be doing, and maybe you can correct me if I go astray?

Cheers and touch base soon.
b
