I don't mean rendering to a file and loading that file as a texture; I mean seeing the output in real time.
Think of it as a TV installed in the level. We push a button, the sequencer starts playback, and we can see its output. If we see something wrong, we fix it in the sequencer and play it again.
It could be useful when creating content for a complex screen with the sequencer: we make a model of the complex screen and set the sequencer's output as its texture, so we can see how it will look.
Have you looked at scene capture components and render targets?
Not yet. Does it work with sequencer output?
Use a scene capture component instead of the camera you are using in the sequencer. It works just like a standard camera, but it outputs to a RenderTarget, which can be used like a standard texture and is updated each tick to show the camera's output.
So it can be used both in UMG and inside a material applied to something like a TV.
I don't think Sequencer will treat it as a camera, though, so you might have to attach it to the camera used in the sequencer, or build something with Blueprints that makes the scene capture component follow your camera.
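For the "follow the camera" approach, a minimal C++ sketch might look like the following. This is an assumption about how you could wire it up, not code from the thread; the class and property names (`ASequencerScreenCapture`, `FollowTarget`) are hypothetical, and it only compiles inside an Unreal Engine project:

```cpp
// Hypothetical actor that makes a USceneCaptureComponent2D follow the
// camera actor driven by Sequencer, so the render target shows exactly
// what the sequencer camera sees.
#include "GameFramework/Actor.h"
#include "Components/SceneCaptureComponent2D.h"
#include "SequencerScreenCapture.generated.h"

UCLASS()
class ASequencerScreenCapture : public AActor
{
    GENERATED_BODY()

public:
    ASequencerScreenCapture()
    {
        PrimaryActorTick.bCanEverTick = true;
        Capture = CreateDefaultSubobject<USceneCaptureComponent2D>(TEXT("Capture"));
        RootComponent = Capture;
    }

    // The camera actor animated by Sequencer; assign it in the editor.
    UPROPERTY(EditAnywhere)
    AActor* FollowTarget = nullptr;

    UPROPERTY(VisibleAnywhere)
    USceneCaptureComponent2D* Capture = nullptr;

    virtual void Tick(float DeltaSeconds) override
    {
        Super::Tick(DeltaSeconds);
        // Copy the sequencer camera's transform every tick. Assign a
        // TextureRenderTarget2D asset to Capture->TextureTarget and use
        // that render target in the TV screen's material.
        if (FollowTarget)
        {
            SetActorTransform(FollowTarget->GetActorTransform());
        }
    }
};
```

With this in the level, pressing Play on the sequence updates the render target live, so the TV material shows the playback as it happens.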
Great idea! I will check it.