Course: Neural Network Engine (NNE)

@behram_patel if you have NNE-specific questions, you can ask them here. We have some real experts in here who may even be faster to answer than I am :sweat_smile:

Also check out the previous posts and tutorials that have been posted here; there are some real gems around already :slight_smile:


Hey Nico,
Yes, my questions will be around NNE. I will manage the other aspects like Niagara, etc.
I'm looking through the wealth of information shared in all the posts.
Will write back soon.

Cheers,
b

Thanks for the heads up, Suthriel.
I'm going through everyone's posts and will write back soon.
Can't wait to dive into this (and break my head :stuck_out_tongue: )

Cheers,
b

Hi there,
Thank you for the offer. Most definitely appreciate a helping hand.

I am looking to ingest a camera feed (via a webcam) and generate depth from that.
Then process that to detect a proximity threshold to a plane (fixed distance).

I see that you are generating depth estimation from the frame buffer within the engine.

Have you managed to ingest a video stream from a live camera and process that?
I'm guessing you'd have to use OpenCV for that as well?

Thanks, and I look forward to the initial steps.

b

The way I do it is as follows:

  1. get scene image as a texture from 2D scene capture component attached to the character
  2. extract the texture data and use it to create a cv::Mat (see the sketch after this list)
  3. Preprocess the cv::Mat for Midas
  4. run inference
  5. get output depth map as a cv::Mat
  6. Convert output depthmap to texture
  7. using Blueprints, update the UI widgets with the captured input frame and the output depth map. This is not the best pipeline and can be optimized further, but it works
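
For reference, here is a minimal sketch of steps 1 and 2 (an illustration under assumptions, not the exact code used above): it assumes the scene capture component writes into a UTextureRenderTarget2D with an 8-bit BGRA format and does a blocking read-back into a cv::Mat.

#include "Engine/TextureRenderTarget2D.h"
#include "TextureResource.h"
#include <opencv2/imgproc.hpp>

// Read back the scene capture's render target and wrap the pixels in a cv::Mat.
cv::Mat RenderTargetToMat(UTextureRenderTarget2D* RenderTarget)
{
	TArray<FColor> Pixels;
	FTextureRenderTargetResource* Resource = RenderTarget->GameThread_GetRenderTargetResource();
	Resource->ReadPixels(Pixels); // blocking read-back on the game thread

	// FColor is laid out as BGRA, so wrap the buffer as an 8-bit 4-channel Mat.
	cv::Mat Bgra(RenderTarget->SizeY, RenderTarget->SizeX, CV_8UC4, Pixels.GetData());

	// cvtColor allocates a new buffer, so the result does not reference Pixels.
	cv::Mat Bgr;
	cv::cvtColor(Bgra, Bgr, cv::COLOR_BGRA2BGR);
	return Bgr;
}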

I haven't tested input from a webcam yet, but I imagine the steps would be like this:

  1. Capture an image from the webcam using OpenCV's VideoCapture(0) (see the sketch after this list)
  2. Preprocess the cv::Mat for Midas
  3. run inference
  4. get output depth map as a cv::Mat
  5. post-process the output depth map to figure out which pixels are closer to the camera and which are farther away, using the grayscale values in the depth map
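
A rough sketch of steps 1 and 2, assuming a MiDaS-small style model with a 256x256 RGB input (check your model's actual input size and normalization; these values are assumptions):

#include <opencv2/videoio.hpp>
#include <opencv2/imgproc.hpp>

// Grab one webcam frame and preprocess it for a MiDaS-style depth model.
cv::Mat CaptureAndPreprocess()
{
	cv::VideoCapture Capture(0); // default webcam
	cv::Mat Frame;
	if (!Capture.isOpened() || !Capture.read(Frame))
	{
		return cv::Mat(); // no camera / no frame
	}

	cv::Mat Rgb, Resized, Input;
	cv::cvtColor(Frame, Rgb, cv::COLOR_BGR2RGB);     // model expects RGB
	cv::resize(Rgb, Resized, cv::Size(256, 256));    // assumed network input size
	Resized.convertTo(Input, CV_32FC3, 1.0 / 255.0); // scale to [0, 1]

	// Input now holds 256 * 256 * 3 floats in HWC order; copy them into the
	// NNE input tensor (transpose to CHW first if the model expects that layout).
	return Input;
}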

These are not the final steps; can you explain the plane part a bit more?


Has anyone here tried to run NNERuntimeORT at runtime in iOS distribution builds? It always crashes for me: iOS library crashes in distribution configuration

I haven't tried actually packaging a game yet; all my tests are on Windows in the editor.

Thank you :pray:t3:

The plane part explanation:

Let's imagine a web camera mounted on a TV that shows a UE scene with a fish model.
When I detect that the user is close to the webcam (TV screen), the fish will run away.

In an ideal scenario I would also use OpenCV to detect a finger point / touch gesture, plus the depth map, to determine whether the user intends to interact with the fish. But I'll start with depth, and once that works I will look into gesture recognition etc.

As a side question, I'm guessing the built-in OpenCV plugin is working for you?

Since there is no documentation for the built-in UE OpenCV plugin, I'm first trying to learn how to use OpenCV with UE. Most tutorials point towards 4.27, but I'm close to getting it to work in 5.3 / 5.4.

Thanks for your help. When I publish a tutorial I will give you credit for it.

Cheers,
b


Of course. The OpenCV build that comes with Unreal Engine is working fine for 5.3 and 5.4, and it doesn't really need documentation: if you have any problems with OpenCV in Unreal Engine, just imagine you are developing a regular computer vision application with OpenCV and debug your code based on that.

for the fish application, here are the updated steps:

  1. Capture image from webcam using opencv videocapture(0)
  2. run inference with Midas to get depth map
  3. run inference with an object detector (e.g. YOLO) to get the person bounding box
  4. get output depth map as a cv::Mat
  5. extract person bounding box from depth map
  6. post-process the person bounding box extracted from the depth map:
    depth map values usually range from 0 (darkest, i.e. farther away) to 1 (brightest, i.e. closer);
    the most basic post-processing step would be to average all the pixels in the extracted
    bounding box and define a threshold value between 0 and 1 (e.g. 0.7) that decides whether the
    person is close or not (see the sketch below)
  7. use the output of the depth map post-processing in Blueprints or C++ to control the fish character and its animations (you will have to expose your C++ NNE code to Blueprints using the Blueprint macros)

These steps should get the ball rolling on the app, and you can iterate from there if needed
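
A minimal sketch of steps 5 and 6, assuming a depth map normalized to [0, 1]; the clamping and the 0.7 threshold are illustrative values, not final ones:

#include <opencv2/core.hpp>

// Average the depth values inside the detected person's bounding box and
// decide whether the person counts as "close".
bool IsPersonClose(const cv::Mat& DepthMap /* CV_32F, values in [0, 1] */, const cv::Rect& PersonBox)
{
	// Clamp the detector's box to the depth map bounds before cropping.
	const cv::Rect Roi = PersonBox & cv::Rect(0, 0, DepthMap.cols, DepthMap.rows);
	if (Roi.area() == 0)
	{
		return false;
	}

	const double MeanDepth = cv::mean(DepthMap(Roi))[0];
	const double CloseThreshold = 0.7; // brighter (closer) than this counts as "close"
	return MeanDepth > CloseThreshold;
}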


Thanks a ton.
I'm working on exactly that: debugging OpenCV and UE5 crashes :sweat_smile:
Right now the built-in OpenCV is crashing on me. I can't get it to show an image. So, like you said, I'm trying to make sure that's working first.

Example header

#pragma once

#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "opencv2/core.hpp"
#include <opencv2/imgcodecs.hpp>
#include <opencv2/highgui.hpp>
#include "Log.h"
#include "Misc/Paths.h"
#include "TestActor.generated.h"

UCLASS()
class ATestActor : public AActor
{
	GENERATED_BODY()

public:
	ATestActor();

	void TestOpenCV();

protected:
	virtual void BeginPlay() override;
};

Example C++


ATestActor::ATestActor(){

	//TestOpenCV();

}

void ATestActor::TestOpenCV(){
	FString RelativePath = FPaths::ProjectDir();
	FString FullPath = IFileManager::Get().ConvertToAbsolutePathForExternalAppForRead(*RelativePath);
	std::string path(TCHAR_TO_UTF8(*FullPath));
	UE_LOG(LogEndoVRCore, Log, TEXT("Testing OpenCV..."));
	cv::Mat img = cv::imread(path + "ThirdParty/Data/Lenna.png", cv::IMREAD_COLOR);
	if (img.empty())
	{
		// imread failed (wrong path or missing file); imshow on an empty Mat throws
		UE_LOG(LogEndoVRCore, Warning, TEXT("Failed to load Lenna.png"));
		return;
	}
	cv::imshow("Display window", img); // this is the call that crashes (see below)
	cv::waitKey(0); // Wait for a keystroke in the window
}

void ATestActor::BeginPlay(){
	Super::BeginPlay();
	TestOpenCV();
}

I’m getting a crash on the imshow() function.
Here is the thread with that discussion:

opencv not working in UE5

Cheers and thanks for your help.
b

@Zaratusa Note that we don't officially support mobile yet with our NNERuntimeORT. It's even more exciting that you already got it working on Android, well done! Please let us know how it goes with iOS.


What error are you getting? I don't think showing an image using OpenCV in Unreal Engine is straightforward, though. Here are two ways you can work around that:

  1. Read the image, save it using cv::imwrite and inspect it; if you see your saved file, you are good to go

  2. In the Unreal Engine world, images are textures. You can create a UI widget, initialize a texture from the read image and show it; or, in the game world, add a plane, create a texture from the image and apply it to the plane. That way you will see the read image (a rough sketch is below)

Option 1 is the more straightforward of the two, though.
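
For option 2, here is a minimal sketch of one common approach (an assumption, not necessarily how the posters above do it): copy a BGR cv::Mat into a transient UTexture2D, which can then be assigned to a UImage brush or a dynamic material parameter.

#include "Engine/Texture2D.h"
#include <opencv2/imgproc.hpp>

// Copy a BGR cv::Mat into a transient UTexture2D for display in UMG or on a mesh.
UTexture2D* MatToTexture(const cv::Mat& Bgr)
{
	cv::Mat Bgra;
	cv::cvtColor(Bgr, Bgra, cv::COLOR_BGR2BGRA); // PF_B8G8R8A8 expects 4 channels

	UTexture2D* Texture = UTexture2D::CreateTransient(Bgra.cols, Bgra.rows, PF_B8G8R8A8);
	void* Dest = Texture->GetPlatformData()->Mips[0].BulkData.Lock(LOCK_READ_WRITE);
	FMemory::Memcpy(Dest, Bgra.data, Bgra.total() * Bgra.elemSize());
	Texture->GetPlatformData()->Mips[0].BulkData.Unlock();
	Texture->UpdateResource();
	return Texture;
}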


Thank you for your continuous help, dude!

I've realized I got my ■■■ handed to me because I tried to punch above my weight. Not only am I trying to wrangle NNE, I'm also wrapping my head around OpenCV... and that too in UE!

Nico,
I apologize for muddying a dedicated NNE thread with OpenCV.
Here's what I'm doing, based on gabaly92's previous advice:

  1. Create a sample OpenCV C++ project and get comfortable with the process.
    I have managed to set up my dev environment and I'm dealing with common beginner mistakes, so I'm making progress there.

  2. Learn how to use the DNN module / ONNX for inference using OpenCV on the CPU and then the GPU (see the sketch after this list).
    This will happen next week.

  3. Once I can do that, I will come back for the final boss fight with UE NNE here, so that I get the OpenCV problems out of the way.
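
For step 2, a minimal sketch of ONNX inference with OpenCV's dnn module; the model path, image path and input size are placeholders, and the CUDA backend only works if OpenCV was built with CUDA support:

#include <opencv2/dnn.hpp>
#include <opencv2/imgcodecs.hpp>

int main()
{
	// Load an ONNX model; "midas_small.onnx" is a placeholder path.
	cv::dnn::Net Net = cv::dnn::readNetFromONNX("midas_small.onnx");
	Net.setPreferableBackend(cv::dnn::DNN_BACKEND_CUDA); // or DNN_BACKEND_OPENCV for CPU
	Net.setPreferableTarget(cv::dnn::DNN_TARGET_CUDA);   // or DNN_TARGET_CPU

	// Preprocess a test image into an NCHW blob and run a forward pass.
	cv::Mat Frame = cv::imread("test.jpg");
	cv::Mat Blob = cv::dnn::blobFromImage(Frame, 1.0 / 255.0, cv::Size(256, 256),
	                                      cv::Scalar(), /*swapRB=*/true, /*crop=*/false);
	Net.setInput(Blob);
	cv::Mat Depth = Net.forward(); // 1 x 1 x H x W depth prediction for a MiDaS-style model
	return 0;
}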

Many thanks for your help. Will write back soon.

b


@Zaratusa I am trying to package a project for Android (VR) using ONNX models, but when it tries to cook the models it says unsupported target. Did you manage to package an Android project with ONNX models? I would appreciate any help.

Ok, small update:
1. Spent the week getting OpenCV C++ set up in Visual Studio
2. Built OpenCV from source to add GPU support
3. Got the MiDaS model to run from Visual Studio on the GPU

Phew!
@gabaly92 can I ping you if I have trouble making this into a plugin (the OpenCV part, not the DNN)?
Perhaps I don't even need the plugin, since the built-in OpenCV + UE 5.3 / 5.4 NNE will handle this.
(I hope the built-in OpenCV supports webcam input. This link says it doesn't.)
Blink opencv UE 5 fork

Time for the final boss fight is near :martial_arts_uniform:

Cheers,
b


I'm using the NNERuntimeORT from the ue5-main branch, which has runtime support and an overall simpler plugin structure. However, the Android binaries have to be included nonetheless.


Sure, let me know if you have any questions. I haven't created a plugin out of an NNE project yet, but regardless, let me know when you get stuck and I'll see what I can do to help.


You are amazing.
Will write back soon.
Thank you.
b


Where did the NNE plugin go in 5.4.4? It doesn't show up if I search for NNE or Neural Network Engine.

@Cyberqat The Core NNE API is now part of the Engine code, not a separate plugin
