Large world coordinate system question

I was wondering if Epic has given any thought to the large world coordinate system ticket that's on the Trello backlog? ( Trello )
Does it include any kind of coordinate translation from large world coordinates to camera-relative coordinates, so that rendering would be done with acceptable precision? I read that Star Citizen is doing something like this.

Just asking because I've been struggling for months with the rendering/animation systems and how their precision falls apart when the player moves far from the world origin (and no, I can't use world origin offsetting, because it doesn't work in online games).
I tried taking examples from the ShooterGame sample, but it has the same kind of precision problem (accurate aiming with weapons is the problem; it can't be done with the gun jittering about): - YouTube

Bump. Hoping to get a dev response.

I would like some form of info on this as well :slight_smile:

Bump. Would still like some kind of answer to this.

Ditto. Very interested in a response to this.

More Bump on that…

Origin shifting/world composition are the only methods to do this; there are no plans to implement something like double-precision math in Unreal for enormous worlds. I have a source for that, but can't find it at the moment. The main reason is that double precision isn't supported, or doesn't behave nicely, on all architectures (even on PC), and it also adds a lot of cost. The engine has a much wider/different target audience than something like Star Citizen.

And you're right, it doesn't natively work in multiplayer - but if you want to make a multiplayer game on that scale without a full-time network programmer on your team (implementing support for it is a full-time job on its own), then I would suggest changing your idea to fit, or switching to an engine tailored to that kind of game - and that isn't something I'd say lightly.

You can build a special server architecture that monitors multiple worlds at the same time, but you would need dedicated servers to do so. It can be done with origin shifting, but it's not easy.

A segmented world-space coordinate system would be a good solution. There are papers about that floating around the internet, but I've lost the one I read before.

I think it could be implemented as a "megacoordinate" for object transforms only. "Megacoordinate" means either a double-precision position or something else (a bignum? probably overkill). The extra precision is needed ONLY for object positions.

During the rendering phase, the engine would need to extract those coordinates, convert them to floats, and pass them to the renderer.

That is, it should be easier than a full conversion to double precision (a lot of gaming hardware doesn't support doubles properly, AFAIK - there would be a performance loss, at least).

Alternatively, you could have "cells" in 3D space, and coordinates within a cell could be stored as floats.

In other words, it is doable, but not exactly trivial. Still, I think it is something worth investing in. A 10 km playable region in this day and age isn't good enough, IMO.

Here’s the article I was looking for earlier: Best of Game Programming Gems - Mark DeLoura - Google Books

Combine that with client-side origin rebasing, which would work as the basis normalization described in the article.

I love this chapter. By the way, it's in Game Programming Gems Volume 4, chapter 2.3. I read it over and over again in hopes that my brainwave telepathy will affect Epic's decision, LOL.

If anyone has attempted or started working on something like this, please let me know! I'd be very interested in the approach.