Hello,
I have the following code, and it works well:
```cpp
TArray<FColor> PixelBuffer;
RenderTargetResource->ReadPixels(PixelBuffer);
// Call a C API exported from a DLL written in Rust.
SendFrameFunc(PixelBuffer.GetData(), PixelBuffer.Num() * 4 /*RGBA*/);
```
And in my Rust code, the pixels are copied into a shared-memory region and sent to another process:
```rust
pub unsafe extern "C" fn send_frame(frame: *const ffi::c_char, size: u32) -> i32 {
    // ...
    let mut region = MemoryRegion::new(size);
    let dest = region.map(..).unwrap();
    let dest_ptr = std::ptr::addr_of_mut!(*dest);
    // copy_nonoverlapping takes a usize count, so the u32 is cast.
    std::ptr::copy_nonoverlapping(frame, dest_ptr as *mut ffi::c_char, size as usize);
    message.memory_regions.push(region);
    // ...
}
```
The above code works fine, except that it is inefficient. Each frame is 8 MB: it is read into PixelBuffer and then copied again into the memory region.
I want the ReadPixels method to load pixels directly into shared memory to avoid the extra copy. I see that TArray has a constructor, `TArray(const ElementType* Ptr, SizeType Count)`, that accepts a memory address. I think I can use it to construct a TArray backed by my memory region.
- I don't want TArray to free my memory. Would it free `ElementType* Ptr` on destruction? How can I avoid that? By implementing my own allocator?
- I will initialize the TArray with exactly the memory size the pixels need, so I assume reallocation would never occur?