
# What happened to the "large world coordinate system" feature, the highest voted in the roadmap?

For CPUs the difference is marginal; for GPUs it is literally two times slower.
And for rendering performance, you would still need to convert the doubles to floats anyway.

Yeah, for rendering you can always shift the origin around the player locally, so that the player sits at the center of a ~10 km radius where single precision is enough. Only on the server, where multiple players are connected in different regions of the world, do you definitely need double precision.

You can use double-precision in your engine and do origin shifting before converting to float for the GPU. I did it back in the XNA days and it worked beautifully. The precision errors get pushed away from the camera in proportion to their imprecision. What you can’t prevent is the extra data which must be sent to the clients. I should note that by “origin shifting” I mean using the camera’s position as the origin.

First of all, apologies for quoting an almost one-year-old post… but I cannot help asking this: does this mean all the positions and rotations need to be calculated manually (using DVector)? What about objects with submeshes or animations, which get calculated automatically deep inside the engine using floats?

You have to do those calculations in camera space.
You keep positions in doubles; in the simplest case you just compute
ObjectPosition - CameraPosition = LocalPosition and pass that as floats to the rendering engine.
In Unreal you could probably use Origin Rebasing to do all the visual calculations and use doubles to calculate positions in world space, but you would still need to transform the double-precision world space into single-precision local space.
Haven't done it myself, so I don't know the implementation details, but that's the high-level overview of how to do it.

Essentially you calculate all of your movement manually, then you call “Set World Location and Rotation” or whatever function you need from the engine, and you just convert back to an FVector. For example:

```
FVector UnrealLocation = GetActorLocation();
DVector MyDoubleVector = DVector(UnrealLocation);

// Calculate all movement using doubles ONLY. Cast any floats like 'DeltaTime'
// to doubles. 'MovementSpeed' is a double, for example.
DVector NewVelocity = DVector::ForwardVector * MovementSpeed * (double)DeltaTime;
MyDoubleVector += NewVelocity;

// Convert back to FVector
FVector NewUnrealLocation = FVector(MyDoubleVector.X, MyDoubleVector.Y, MyDoubleVector.Z);
SetWorldLocation(NewUnrealLocation);
```

This gave us the precision we needed. Some caveats, however: you need to calculate collision rejection yourself (not a problem for us, we don’t have any collision), and PhysX etc. won’t work this way either (again, not a problem for us). The key takeaway is that all math operations need to be performed as doubles at all stages, or you will lose precision. Lots of multiplication and division in our case eventually degraded the accuracy of floats enough to cause jitter. Doing all the math in doubles and converting back fixed that for us.

This approach worked for us because it’s a unique use-case, it won’t work for everyone.

The best way to achieve large-world precision in a typical game would be to set the camera at world origin, and rotate the contents of the world around it. This is what Kerbal Space Program does btw, and it disguises the problem enough for them to get away with it.

That’s how I did it when XNA was a thing. You use doubles in your engine, translate to camera-as-origin, and then cast down to floats for the GPU. The problem is disguised because the precision errors grow with distance from the camera, but their visual significance shrinks accordingly. IMO you don’t just get away with it; it works flawlessly.

Edit Oh, I already said this in a post above. Apologies.

What have you actually done that we voted on?

It seems Unigine supports a large-world system (Unbounded Worlds).

Really need this.

Curious if there are any updates from Epic regarding this, it is still a highly requested issue.

Both Unigine and Star Citizen are able to support 64-bit positions and physics; in fact, one team swapped from UE4 to Unigine for this reason: https://devblog.dualthegame.com/2016…-to-unigine-2/.

Can we reconsider such an addition in principle? What are the bare-minimum modules that would need to be updated to support 64-bit positions? Even just highlighting the systems that would likely need changing could open the door to smart pull requests. With the right code architecture, this feature could be toggleable for platforms that may not benefit from it (that said, even mobile phones are 64-bit native these days).

Regarding 64-bit physics, has NVIDIA provided an update on 64-bit calculations in PhysX for CPU?

It would also certainly enhance the case for using UE4 in simulation environments.

Since they are nowadays pushing the engine toward “enterprise” projects beyond games, yes, this is a must-have, specifically for training/military simulations.

Epic seem a bit too comfortable to care right now…
Plus, PUBG / ARK have already worked around it…
And they’re two studios that could have applied pressure.

I think that Epic are busy with other things and are stretched a bit thin.

My 2 cents.

As long as PhysX itself does not support double precision, it’s very unlikely that UE will.

Not out of the box. PhysX solution to this is:
http://docs.nvidia.com/gameworks/content/gameworkslibrary/physx/guide/Manual/OriginShift.html

and unless something has changed in the last two years, this is the reason for only partial support of doubles:

I think the Squad devs are trying to implement multiplayer World Origin Rebasing in their game using the functionality provided by the engine, but I read that it requires them to go over all RPCs and check that FVectors and other coordinate-related data are sent in a consistent manner.