Transfer of texture bits to another application


This is fairly advanced (to me), so I am asking whether there is a better approach, a ready code sample, etc…

Basically, this is what I am planning to do:

  1. An external app (Application A) is executed, which subsequently launches another application, B (an Unreal application).
  2. B will have a SceneCapture2D so that the scene is rendered onto a render target texture.
  3. B will also have a staging texture mapped to this texture so that the image bits can be read back fast (at 60 Hz, or at least 30 Hz).
  4. The image bits will then be passed to Application A via a memory-mapped file.
  5. Application A will decode this into a bitmap and then display it in a child window.


  • I am not entirely sure whether (3) is needed. If there is sample code, it would be highly appreciated; I have never done staging-texture programming, and a working sample definitely helps.

  • The whole process (1-5) is probably quite lengthy. What I am trying to do is an external app that displays the 3D Unreal content in a child window, so you can see that B will keep passing the RTT to A at 30/60 Hz, and application B's render window will be resized to (1,1) pixels. There is no need for the render window, as we are interested only in the RTT.
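
Since a working staging-texture sample was asked for: I can only offer a raw D3D11 sketch, not UE4 RHI code. Error handling is omitted, and the device, context, and source render-target texture are assumed to already exist:

```cpp
// Sketch only: the usual pattern for CPU readback in D3D11 is a
// D3D11_USAGE_STAGING texture, CopyResource, then Map with READ access.
#include <d3d11.h>

ID3D11Texture2D* CreateStagingCopyTarget(ID3D11Device* Device,
                                         ID3D11Texture2D* Source)
{
    D3D11_TEXTURE2D_DESC Desc = {};
    Source->GetDesc(&Desc);
    Desc.Usage          = D3D11_USAGE_STAGING;      // CPU-accessible
    Desc.BindFlags      = 0;                        // staging can't be bound
    Desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
    Desc.MiscFlags      = 0;

    ID3D11Texture2D* Staging = nullptr;
    Device->CreateTexture2D(&Desc, nullptr, &Staging);
    return Staging;
}

void ReadBackPixels(ID3D11DeviceContext* Context,
                    ID3D11Texture2D* Source,
                    ID3D11Texture2D* Staging,
                    void (*Consume)(const void* Data, UINT RowPitch, UINT Height))
{
    // GPU-side copy into the staging texture, then map it for CPU reads.
    Context->CopyResource(Staging, Source);

    D3D11_MAPPED_SUBRESOURCE Mapped = {};
    if (SUCCEEDED(Context->Map(Staging, 0, D3D11_MAP_READ, 0, &Mapped)))
    {
        D3D11_TEXTURE2D_DESC Desc = {};
        Staging->GetDesc(&Desc);
        Consume(Mapped.pData, Mapped.RowPitch, Desc.Height);
        Context->Unmap(Staging, 0);
    }
}
```

One caveat: Map on a staging texture stalls the CPU until the GPU has finished the copy. Passing D3D11_MAP_FLAG_DO_NOT_WAIT and retrying a frame later (or double-buffering the staging textures, as you describe below) avoids that stall.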

Hi Syed, you need to understand two things. First: how does a texture work? Basically, a texture, whatever the image format, is a big array of bytes; don't worry about compression or image file types for now. An RGBA texture consists of 4 bytes per pixel; multiply that by the width and height and you get the total number of bytes for the texture. This is called the raw data, and it usually lives in an unsigned char* (a.k.a. uint8*). Research that on Google and you will see it is not alien tech. If you want to learn more about it, please check this tutorial
Now the second thing: you must understand socket connections. Basically, a socket is a persistent connection between two or more machines over the IP protocol, and you can send the data bytes of whatever you want through it. There are several tutorials around the forum. I know it is overwhelming, but it is good to learn.


Actually, I have done something similar in another engine, but with DirectX 9. Performance is absolutely critical, as I don't want to copy the bits inside the tight rendering loop (it can take a few ms if not done correctly).
To improve performance, what I did was create two RTTs: while one RTT is locked and another thread copies its bits (which takes a few ms), the renderer draws into the other RTT. This way the engine does not have to wait for one RTT to be locked, copied, and unlocked before the renderer can use it again.

Well, now I am using UE4 with DirectX 11, and I read that a staging texture can improve performance. So I am exploring how to do something similar in DirectX 11.

Also, there is no need for a web socket; I am using a memory-mapped file, as this is a very fast way to exchange data between two local applications.
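
For completeness, a bare Win32 sketch of that memory-mapped exchange; the mapping name is made up for this example and error handling is omitted:

```cpp
// Windows-only sketch: a named, pagefile-backed mapping that both A and B
// open by the same name to get views of the same physical pages.
#include <windows.h>

void* OpenSharedBlock(SIZE_T Bytes)
{
    HANDLE Mapping = CreateFileMappingW(
        INVALID_HANDLE_VALUE,        // backed by the page file, not a disk file
        nullptr,
        PAGE_READWRITE,
        0, (DWORD)Bytes,
        L"Local\\UnrealFrameShare"); // invented name; both processes must match
    if (!Mapping)
        return nullptr;

    // Returns a pointer into the shared region (same bytes in both processes).
    return MapViewOfFile(Mapping, FILE_MAP_ALL_ACCESS, 0, 0, Bytes);
}
```

CreateFileMappingW succeeds in both processes; the one that runs second simply opens the existing mapping, so B can write pixels into the view and A can read them with no per-frame system calls.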

Hmm, AFAIK you can access the screen buffer by creating your own GameViewportClient class. I did what you describe some time ago with that, plus sockets and Python. Yes, it has a little bit of delay, but I didn't worry too much about it. Anyway, if you want performance, maybe you can access the GBuffer directly; check this thread: . A guy called Temaran modified the engine source and created some kind of cool post-process, and he shared the code, yay! I think this is a better approach than mine, but you will have to learn it yourself; I don't really know exactly what Temaran did.
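
For what it's worth, the GameViewportClient route described above looks roughly like this in UE4 C++ (UCLASS/GENERATED_BODY boilerplate omitted; you point the engine at the class via GameViewportClientClassName in DefaultEngine.ini). Note that ReadPixels blocks until the GPU finishes the frame, which is exactly the stall the staging-texture approach tries to avoid:

```cpp
// UE4 sketch: a custom viewport client that grabs the backbuffer each draw.
#include "Engine/GameViewportClient.h"

class UMyViewportClient : public UGameViewportClient
{
public:
    virtual void Draw(FViewport* Viewport, FCanvas* SceneCanvas) override
    {
        Super::Draw(Viewport, SceneCanvas);

        TArray<FColor> Pixels;
        if (Viewport->ReadPixels(Pixels)) // blocks until the frame is ready
        {
            // Pixels.Num() == Viewport->GetSizeXY().X * GetSizeXY().Y;
            // hand these bytes to the shared-memory writer here.
        }
    }
};
```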

Best regards, happy hunting!

Thank you… that is quite a lead you have given me so far!