Hi,
I want to capture frames from my app and send them to OpenCV for some further processing.
First I tried to grab each rendered frame (from Blueprints and from C++) and convert it to an image in memory, so that I could later send it to OpenCV over something like RTSP. But the approaches I found for capturing a rendered frame are all fairly slow, so my FPS drops.
Later on I found Pixel Streaming. I tested it in a browser and it works like a charm, very smoothly. So I thought about a few ideas:
Convert WebRTC to something like RTSP with a video-only stream, and send that to OpenCV.
Create another WebRTC server that is already capable of re-streaming to RTSP, like SRS. I tried it for a bit, but failed to receive any pixels from Unreal.
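Either way, the OpenCV side of receiving such a stream should be simple; a minimal sketch, assuming a hypothetical RTSP URL exposed by the re-streamer:

```cpp
#include <opencv2/opencv.hpp>

int main()
{
    // Hypothetical URL; whatever the re-streamer (e.g. SRS) publishes.
    cv::VideoCapture cap("rtsp://localhost:8554/live/unreal", cv::CAP_FFMPEG);
    if (!cap.isOpened())
        return 1;

    cv::Mat frame;
    while (cap.read(frame))  // blocks until the next decoded frame arrives
    {
        // ... further processing on `frame` goes here ...
        cv::imshow("unreal", frame);
        if (cv::waitKey(1) == 27)  // Esc to quit
            break;
    }
    return 0;
}
```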
Then I thought: maybe it is possible to get the encoded video stream straight out of the Unreal app? Then I would catch it in C++ and send it to OpenCV, or use it with OpenCV on the same machine.
It sounds like a simple task, given that Pixel Streaming already works, but I couldn't find any detailed description. What's my best option here?
The FGameplayMediaEncoder uses this delegate to get notified when a frame has finished drawing and to receive references to the frame buffers on the GPU, so it can encode them for streaming etc. I believe this is also how older versions of Pixel Streaming worked, but it may be different now.
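If you want to experiment with this, here is a minimal sketch of hooking that delegate. I'm assuming it is `FSlateRenderer::OnBackBufferReadyToPresent` (that's what GameplayMediaEncoder registers in the engine versions I've looked at; newer versions may use `FTextureRHIRef` instead of `FTexture2DRHIRef`), and `FMyFrameGrabber` is a hypothetical class of yours:

```cpp
#include "Framework/Application/SlateApplication.h"
#include "Rendering/SlateRenderer.h"
#include "Widgets/SWindow.h"
#include "RHIResources.h"

class FMyFrameGrabber
{
public:
    void Register()
    {
        if (FSlateApplication::IsInitialized())
        {
            FSlateApplication::Get().GetRenderer()
                ->OnBackBufferReadyToPresent()
                .AddRaw(this, &FMyFrameGrabber::OnBackBufferReady);
        }
    }

private:
    // Called on the render thread with the fully drawn frame still on the GPU.
    void OnBackBufferReady(SWindow& Window, const FTexture2DRHIRef& BackBuffer)
    {
        // Hand BackBuffer to a hardware encoder here; avoid CPU readbacks
        // if you want to keep the framerate up.
    }
};
```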
You can also use NDI for that.
1- Download the NDI UE5 Plugin.
2- Download the NDI Basic C++ SDK for Windows.
3- Use the NDI Basic SDK's frame receiver to get the frame buffer as a uint8 pointer.
4- Just convert it to a cv::Mat to process it further (see the sketch after this list).
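A minimal sketch of steps 3 and 4 together, assuming the standard NDI SDK receive API and the BGRX/BGRA color format so the buffer maps straight onto a cv::Mat (the window title is made up):

```cpp
#include <Processing.NDI.Lib.h>
#include <opencv2/opencv.hpp>

int main()
{
    if (!NDIlib_initialize())
        return 1;

    // Find NDI sources on the network (the Unreal plugin shows up here).
    NDIlib_find_instance_t finder = NDIlib_find_create_v2();
    uint32_t num_sources = 0;
    const NDIlib_source_t* sources = nullptr;
    while (!num_sources)
    {
        NDIlib_find_wait_for_sources(finder, 1000 /* ms */);
        sources = NDIlib_find_get_current_sources(finder, &num_sources);
    }

    // Receive in BGRX/BGRA so the bytes are already in OpenCV's layout.
    NDIlib_recv_create_v3_t recv_desc;
    recv_desc.source_to_connect_to = sources[0];
    recv_desc.color_format = NDIlib_recv_color_format_BGRX_BGRA;
    NDIlib_recv_instance_t receiver = NDIlib_recv_create_v3(&recv_desc);
    NDIlib_find_destroy(finder);

    NDIlib_video_frame_v2_t video;
    for (;;)
    {
        if (NDIlib_recv_capture_v2(receiver, &video, nullptr, nullptr, 1000)
                == NDIlib_frame_type_video)
        {
            // Step 3: video.p_data is the uint8 buffer.
            // Step 4: wrap it in a cv::Mat (no copy) and process it.
            cv::Mat frame(video.yres, video.xres, CV_8UC4,
                          video.p_data, video.line_stride_in_bytes);
            cv::imshow("ndi", frame);
            NDIlib_recv_free_video_v2(receiver, &video);
            if (cv::waitKey(1) == 27)  // Esc to quit
                break;
        }
    }

    NDIlib_recv_destroy(receiver);
    NDIlib_destroy();
    return 0;
}
```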
The NDI plugin uses a SceneCapture2D component and a Render Target.
With these, you can separate your game/simulation view from the view you process.
For example, maybe you want to process the scene from a third-person perspective (i.e., process footage of your own play).
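If you want to set up such a capture yourself in C++, a minimal sketch (the function, owner actor, and resolution are hypothetical; the plugin does its own equivalent internally):

```cpp
#include "GameFramework/Actor.h"
#include "Components/SceneCaptureComponent2D.h"
#include "Engine/TextureRenderTarget2D.h"

// Attaches a capture component to an actor so a separate view (e.g. a
// third-person camera) renders into its own render target every frame.
void SetupProcessingCapture(AActor* Owner)
{
    USceneCaptureComponent2D* Capture =
        NewObject<USceneCaptureComponent2D>(Owner, TEXT("ProcessingView"));
    Capture->SetupAttachment(Owner->GetRootComponent());
    Capture->RegisterComponent();

    UTextureRenderTarget2D* Target = NewObject<UTextureRenderTarget2D>(Owner);
    Target->InitAutoFormat(1280, 720);  // resolution of the processed view

    Capture->TextureTarget = Target;     // where this view is rendered
    Capture->bCaptureEveryFrame = true;  // capture continuously
}
```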