Is it practical to use UE5 as a realtime rendering engine via RPC?

I’m totally new to the Unreal ecosystem and looking for a pointer in the right direction. I’m working on a robotics application where I want to simulate robot control plus physics. Part of that involves simulating what a camera would see in order to inform the robot’s actions.

The robot simulation is already handled by a framework built for that purpose. All I need is to be able to describe a scene and a camera placement and get back a rendered image. This needs to be highly photo-realistic, high-FPS (as the simulation proceeds, I need to render the state of the scene on the fly), and low-latency. To be clear, I don’t need any physics or interaction: I just need to load object models, place them in a scene, take a snapshot, update the object placement for the next frame, take another snapshot, and so on…

Right now my working solution is an HTTP server that uses the Blender API for rendering. I’m able to tick the photo-realism box, but renders take seconds at a time. This is why I’m considering UE5: as I understand it, it’s the closest thing there is to Blender Cycles that can render at realtime rates.

I downloaded UE5, fired up the desktop interface, and quickly realised that there’s a whole bunch of game-design concepts I don’t need. What’s more challenging is that they obscure the functionality I do need. So is UE5 the right tool for this job? Does UE5 have a rendering-only library I can use that is decoupled from the gaming/physics aspects? And can I use that functionality in the form of a standalone Python or C++ HTTP server that I can fire up from the command line?

If you’ve made it this far, please take a quick glance at RenderRpc: Render images through RPC communication between Drake and external renderers. · Issue #15915 · RobotLocomotion/drake · GitHub. Drake is the robotics library I want to use, and there is an RPC feature in the works. I just need to write a UE5-based server-side application to go along with it.

You can do it, but it depends on exactly how high the frame rate needs to be.

It might be best to run the simulator and the renderer on the same machine, to at least remove the delay across the network.

You’ll want to receive the scene configuration (mainly the robot’s kinematic pose) from the simulator, apply it to the pose of the robot actor in the scene, render the scene, then capture the rendered framebuffer, encode it, and send it back. This isn’t that different from using Pixel Streaming together with a networked game (rendering remote entities), so that part of the code might be where you could start looking into how to do this.
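
Not from the thread, but here is a minimal C++ sketch of that capture step, assuming a USceneCaptureComponent2D pointed at the scene with a UTextureRenderTarget2D assigned to it; the encoding and RPC transport are left out:

```cpp
// Minimal sketch (my own, not the original code): apply a pose received from
// the simulator, capture one frame with a USceneCaptureComponent2D, and read
// the pixels back on the game thread. The actor/component wiring is assumed
// to exist already (capture component with bCaptureEveryFrame = false and a
// UTextureRenderTarget2D assigned as its TextureTarget).
#include "GameFramework/Actor.h"
#include "Components/SceneCaptureComponent2D.h"
#include "Engine/TextureRenderTarget2D.h"

void CaptureSnapshot(AActor* RobotActor,
                     USceneCaptureComponent2D* Capture,
                     const FTransform& NewPose,
                     TArray<FColor>& OutPixels)
{
    // 1. Apply the pose received from the simulator.
    RobotActor->SetActorTransform(NewPose);

    // 2. Render the scene into the capture component's render target on demand.
    Capture->CaptureScene();

    // 3. Read the framebuffer back. This stalls the rendering pipeline, which
    //    is fine for a lock-step simulation but costs some throughput.
    FTextureRenderTargetResource* Resource =
        Capture->TextureTarget->GameThread_GetRenderTargetResource();
    Resource->ReadPixels(OutPixels);

    // 4. Encode OutPixels (PNG/JPEG/raw) and send it back over the RPC
    //    transport of your choice (omitted here).
}
```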

Also, unless you have real hardware in the loop, the simulation framerate doesn’t need to be the same as real life. You can run the simulation faster or slower than wall clock time, as long as the “clock” you use in the simulation is fully under your control, rather than using a local hardware timer.
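
To make that concrete: if you also want the Unreal side to tick in lock-step with the simulation clock rather than the wall clock, the engine has a fixed-timestep mode. A small sketch, assuming my memory of the FApp setters in Misc/App.h is right (the same effect can be had with the -UseFixedTimeStep and -FPS command-line switches):

```cpp
// Sketch: drive the engine with a fixed timestep so rendering advances by a
// known amount per frame instead of following the hardware timer.
#include "Misc/App.h"

void UseSimulationClock(double SimStepSeconds)
{
    FApp::SetUseFixedTimeStep(true);         // stop using wall-clock delta time
    FApp::SetFixedDeltaTime(SimStepSeconds); // advance exactly this much per tick
}
```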

Thanks a lot for the response @jwatte. I will look up Pixel Streaming as a starting point.

Re your last point, yes we do have control over the sim clock. I said real time, but really I just want it to be as fast as possible so that rendering isn’t a bottleneck. The types of sims we’re running are about real time or slower on our compute.

Leaving the question open for a bit to see if anyone else drops me some leads.

While it wasn’t for exactly the same purpose, on a previous project I worked on demonstrating Unreal, Unity, and a custom rendering engine all displaying the same things as described by a remote client. In our case we had an HTTP socket server set up, which each of the rendering engines would talk to and receive commands from in order to configure the world.

It was merely a tech demonstration, so there wasn’t really a whole lot to it. We had a small pile of models and materials, which we imported and configured in each engine, and we made an object that represented each of them in each engine (using templates for Unity, Blueprints for Unreal). When we received the world configuration from the server, we just mapped the object names from the server to a table of items to spawn, and spawned each item with the given location/rotation. There were additional commands from the remote to handle moving items around (i.e. set a new transform, move to x,y,z, rotate to r,p,y, etc.).

Most of that was just implemented in a GameMode blueprint. It was not particularly difficult to do at all, although everything we had was custom designed for the purpose.
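
For a rough idea of the shape of it, a C++ sketch of that lookup-and-spawn logic might look like the following; the class name, struct layout, and table contents are invented for illustration (the real thing was a Blueprint).

```cpp
// Hypothetical sketch of the name -> item table and spawn/move handling
// described above. Everything here is illustrative, not the original code.
#include "CoreMinimal.h"
#include "GameFramework/GameModeBase.h"
#include "Engine/World.h"
#include "DemoGameMode.generated.h"

struct FSpawnCommand
{
    FString  ObjectName;                      // name sent by the remote server
    FVector  Location = FVector::ZeroVector;
    FRotator Rotation = FRotator::ZeroRotator;
};

UCLASS()
class ADemoGameMode : public AGameModeBase
{
    GENERATED_BODY()

public:
    // Pre-authored content: remote object names mapped to classes to spawn.
    UPROPERTY(EditAnywhere)
    TMap<FString, TSubclassOf<AActor>> SpawnTable;

    // Objects spawned so far, so later "move" commands can find them.
    TMap<FString, AActor*> SpawnedObjects;

    void HandleSpawn(const FSpawnCommand& Cmd)
    {
        if (TSubclassOf<AActor>* Class = SpawnTable.Find(Cmd.ObjectName))
        {
            AActor* Spawned = GetWorld()->SpawnActor<AActor>(
                *Class, Cmd.Location, Cmd.Rotation);
            SpawnedObjects.Add(Cmd.ObjectName, Spawned);
        }
    }

    void HandleMove(const FString& ObjectName, const FTransform& NewTransform)
    {
        if (AActor** Found = SpawnedObjects.Find(ObjectName))
        {
            (*Found)->SetActorTransform(NewTransform);
        }
    }
};
```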

Had we gone beyond the demonstration phase, we were going to build a whole pipeline that would provide model data and material data, bring them into the engine over the network, build them on the fly, and populate the world. But that didn’t happen.

@eblade thanks, that’s very useful to know. It also helps to know about the “GameMode blueprint” - keywords like that are good breadcrumbs.

If by any chance there’s any open code related to this please do link me. Thanks again!

Sadly, there is not. Even my reference material on it seems to have disappeared, so I just have to describe things from memory.

Basically, we just connected to a web socket when the game begins (GameMode BeginPlay), then accepted commands that told us what to do with the environment: spawn things, move things, destroy things. Although we intended eventually to be able to spawn things from network data (i.e. ‘spawn this model… here’s the model data, and the materials, etc.’), we were just spawning pre-authored content into a world at predefined positions (scaled for different units, depending on whether we were in Unity, Unreal, or our custom display engine) and putting it where the remote server told us to.
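
If it helps, here’s a sketch of what that BeginPlay hookup could look like in C++ using Unreal’s WebSockets module; the original was a Blueprint, and the URL, member names, and command handling below are placeholders.

```cpp
// Sketch of the BeginPlay hookup described above, using the engine's
// "WebSockets" module (add it to your .Build.cs dependencies). The URL and
// command handling are placeholders; ServerSocket is assumed to be a
// TSharedPtr<IWebSocket> member of the (hypothetical) game mode class.
#include "Modules/ModuleManager.h"
#include "WebSocketsModule.h"
#include "IWebSocket.h"

void ADemoGameMode::BeginPlay()
{
    Super::BeginPlay();

    // Make sure the WebSockets module is loaded before using it.
    FModuleManager::Get().LoadModuleChecked(TEXT("WebSockets"));

    ServerSocket = FWebSocketsModule::Get().CreateWebSocket(
        TEXT("ws://localhost:9002"));

    ServerSocket->OnMessage().AddLambda([this](const FString& Message)
    {
        // Parse the command (e.g. JSON) and dispatch to spawn / move /
        // destroy handlers such as HandleSpawn / HandleMove above.
        // Parsing omitted here.
    });

    ServerSocket->Connect();
}
```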
