Gameplay from multiple angles captured into video files

I have a 3rd person game wherein I want to place static cameras at arbitrary positions. I want those cameras to capture the gameplay and then save the result of their capture in separate files once the gameplay ends.
I’ve researched different ways of doing it (e.g. Take Recorder, Sequencer, Render Target 2D) but haven’t been successful with any of them. Can someone please help me solve this problem?

It should be possible for each camera to render to a render target, and then separately use a video encoder to compress those targets to video.
However, most hardware-accelerated encoders on graphics cards can only encode one or a few video streams simultaneously, so you may need a beefy system to be able to compress them all while capturing. (You can’t really “capture first, then compress” – the amount of RAM used would be too much.)

You’ll very likely need to write C++ code to open the video encoders in question, and funnel the data from the captured render targets to each encoder, unless you can find a plugin on the marketplace that already does this for you.

Thanks @jwatte. So you think the best way to approach this is indeed with render targets. If encoding in real time would be too RAM intensive, what do you think about an intermediate step, such as saving all frames to disk at a fixed interval, and then, only at the end, crawling through each camera’s folder and encoding the frames into video files?
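For concreteness, the “encode afterwards” step could be a per-camera ffmpeg pass. Here is a minimal C++ sketch that just builds the command line – ffmpeg itself, the `frame_%05d.png` naming scheme, and the folder-per-camera layout are all assumptions, not something the engine provides:

```cpp
#include <string>
#include <vector>

// Build the ffmpeg argument list that turns a folder of numbered PNG
// frames (frame_00001.png, frame_00002.png, ...) into an H.264 video.
// Run one such command per camera folder after the gameplay session ends.
std::vector<std::string> EncodeCommand(const std::string& FramesDir,
                                       const std::string& OutFile,
                                       int Fps = 30) {
    return {
        "ffmpeg",
        "-framerate", std::to_string(Fps),       // input frame rate
        "-i", FramesDir + "/frame_%05d.png",     // numbered-frame input pattern
        "-c:v", "libx264",                       // software H.264 encoder
        "-pix_fmt", "yuv420p",                   // widest player compatibility
        OutFile,
    };
}
```

You would join these arguments and launch the process once per camera folder (e.g. via `FPlatformProcess::CreateProc` or `std::system`); since this runs after gameplay, it can take as long as it needs.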

Encoding in real time is not too RAM intensive. Saving all the frames to memory and encoding after the fact will be too RAM intensive.

Let’s say each frame is 1920x1080x4 bytes. That’s about 8 MB per frame, or 240 MB per second at 30 fps. If you have eight cameras, that’s about 2 GB per second write rate to your disk. First, most disks and systems can’t actually sustain that. Second, you’ll consume a Terabyte of storage in less than ten minutes. If this matches your use case, then that might still be fine.
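The arithmetic above can be sanity-checked with a couple of helpers (the resolution, frame rate, and camera count are just the example figures from this post):

```cpp
// Sanity-check the bandwidth arithmetic for uncompressed RGBA frames.
constexpr long long FrameBytes(long long Width, long long Height) {
    return Width * Height * 4;  // 4 bytes per pixel (RGBA8)
}

constexpr long long DiskRateBytes(long long Width, long long Height,
                                  long long Fps, long long Cameras) {
    return FrameBytes(Width, Height) * Fps * Cameras;
}

// FrameBytes(1920, 1080)           == 8'294'400      (~8.3 MB per frame)
// DiskRateBytes(1920, 1080, 30, 1) == 248'832'000    (~250 MB/s per camera)
// DiskRateBytes(1920, 1080, 30, 8) == 1'990'656'000  (~2 GB/s total)
// At that total rate, 1 TB fills in 1e12 / 1'990'656'000 ≈ 502 s (~8.4 min).
```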

You are right indeed. That would be infeasible for my system; I intend to run this on a MacBook Pro with an M1 Pro. Time wouldn’t be a problem, though – if I need to wait some hours until the process is complete, that is acceptable.

What do you think about the feasibility of using Take Recorder instead, where the cameras are captured as sources and afterwards there is a rendering job to export video from all of them? I tried this, but I’m not sure I’m doing it right, since I only manage to get the rendered output to be the Camera Cuts track, not specific selected cameras.

Recording the action as a replay, and then rendering each camera separately, is likely to be a much more compatible approach.
If you’re doing this for after-action review, you’d then also have the option of moving a camera around to view other perspectives, too.


Nice. Any tips and pointers on how I should architect this solution, and how to capture gameplay as a replay and render it afterwards?

Record and replay is available as part of the network subsystem, so if you build your experience as a properly replicated/networked game, it should Just Work ™
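If it helps, once replication is in place the built-in replay system can be exercised directly from the in-game console (these command names assume the stock DemoNetDriver-based replay setup):

```
DemoRec MyMatch    (start recording a replay named "MyMatch")
DemoStop           (stop recording)
DemoPlay MyMatch   (play the replay back; each camera can then be rendered in its own pass)
```

During playback you can position each static camera and do one render pass per camera, which sidesteps the real-time encoding constraints discussed earlier.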