I’m trying to work out if I’m thinking about this in the right way.
The goal is to create realistic-looking, custom routine workout videos in nice scenery.
So something like:
rigged character in a real time scene
an external DB like Firebase sends commands specifying which animation sequence to perform and how many times, and the character runs through it via linked Blueprints
camera movement controlled in a similar way
Is the best way to ‘film’ this workout happening in the editor just by using screen recording software? I’m trying to keep this as simple as possible especially at the proof of concept stage where a load of variations will be recorded. If you’ve seen anyone else doing this I’d love to check it out.
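To make the plan above concrete, here is a minimal sketch of what a database-driven workout "script" might look like and how the reps expand into individual animation plays. All field names (`anim`, `reps`, `scene`, etc.) are hypothetical, and in Unreal the equivalent logic would live in linked Blueprints; this Python is purely illustrative.

```python
# Illustrative only: a workout "script" as it might arrive from an
# external DB such as Firebase. Field names are hypothetical; in Unreal
# the equivalent logic would live in linked Blueprints.

workout_script = {
    "scene": "beach_sunrise",            # which environment to load
    "steps": [
        {"anim": "squat",   "reps": 12},
        {"anim": "lunge_l", "reps": 10},
        {"anim": "lunge_r", "reps": 10},
    ],
    "camera": [
        {"move": "orbit",  "duration_s": 30},
        {"move": "static", "duration_s": 20},
    ],
}

def expand_steps(script):
    """Flatten the script into one animation call per repetition."""
    calls = []
    for step in script["steps"]:
        calls.extend([step["anim"]] * step["reps"])
    return calls

# Video length scales only with the number of rep entries, not scene
# complexity, which matches the short-vs-long video point below.
print(len(expand_steps(workout_script)))
```

The point of the flat list is that a 30-minute video is just a longer list of the same animation calls, not a heavier scene.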
Thanks @TechLord, yes I think that’s what it would be. I can’t find much in the way of other examples for my specific use case though.
I guess my ultimate question is: could this plan deliver the scene dynamically in the viewport at good quality, without going through rendering, so I can screen record it for upload to YouTube etc.?
Edit: To add some additional context, these videos might be anywhere from a few seconds to 30+ minutes long, which is why screen recording rather than rendering each one would be great. No matter the length of the video, the character stays in the one place, just doing more or fewer repetitions of the animations. The scenes don't get more complex or bloated with extra characters or scenery in long vs short sequences.
If that is all possible, is my main hurdle keeping the environment optimised so that the hardware can handle the viewport output for long enough without heating up and dropping quality?
My approach to a Run-time Animated 3D Scene Constructor would be two systems:
Scripting Interface to execute a ‘script’ at runtime. This would be used to instruct Actors to move, animate, transform, swap materials, and trigger events/VFX/SFX.
Runtime Editor to manually or procedurally (via script) orient, move, animate, transform, and swap materials on Actors, and to place event triggers.
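The two systems above can be sketched as a tiny command dispatcher. The `Actor` class and command names here are invented stand-ins for Unreal Actors and Blueprint calls, assuming a simple (actor, command, args) script format; this is a sketch of the idea, not an Unreal implementation.

```python
# Illustrative only: a minimal runtime "script" interpreter in the
# spirit of the Scripting Interface described above. Actor and command
# names are hypothetical stand-ins for Unreal Actors / Blueprint calls.

class Actor:
    def __init__(self, name):
        self.name = name
        self.position = (0.0, 0.0, 0.0)
        self.material = "default"
        self.current_anim = None

    # Commands the scripting interface can dispatch to an actor.
    def move(self, x, y, z):
        self.position = (x, y, z)

    def swap_material(self, material):
        self.material = material

    def play_anim(self, anim):
        self.current_anim = anim

def run_script(actors, script):
    """Execute a list of (actor_name, command, args) tuples at runtime."""
    for actor_name, command, args in script:
        getattr(actors[actor_name], command)(*args)

actors = {"trainer": Actor("trainer")}
run_script(actors, [
    ("trainer", "move", (100.0, 0.0, 0.0)),
    ("trainer", "swap_material", ("summer_outfit",)),
    ("trainer", "play_anim", ("squat",)),
])
```

A Runtime Editor would then just be a UI that emits the same command tuples interactively, so manual edits and DB-driven scripts go through one code path.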
I would replicate all of these features so multiple producers/developers could collaborate on scenes simultaneously.
But I could be totally off track here, and maybe you’re talking about using Sequencer?
As far as recording for YouTube, you would use standard screen-capturing software like Bandicam, OBS, or others. Optimized rendering is Unreal Engine's job; I personally don't worry about that, thanks to Nanite, Auto LOD, Virtual Textures, etc.
Great info! Points 1 and 2 aren’t concepts I fully grasp yet but they seem the right concept vs Sequencer from what I can tell.
I assume Sequencer would be more time-consuming in the long run, or only used for standard intro/outro animations, if any.
So Scripting Interface + Runtime Editor seems like it will best allow swapping actors in/out at preset locations (where they stay) and having them follow the predefined script of animations pulled from the database. Because the database will have many different script combinations available, Sequencer is probably inefficient there, since it would have to be redone for each combination.
Thanks again, I'm going to try to drill down into those concepts.