Hey all, I was just wondering if we could have a brief update from the UE staff on the feature labeled “Large World Coordinate System” in the Trello roadmap, under the “Core” category.
Besides being a feature that is touched on every once in a while by someone here in the forums and elsewhere, it happens to be the highest-voted card on the roadmap after the ongoing, generic “Improved multiplayer support”. Indeed, “Large World Coordinate System” currently has 711 votes, almost 200 votes more (33% more, to be precise) than the next-highest-voted card (“Blueprint->C++ conversion tool”, which by the way is now coming in May). Nevertheless, the “Large World Coordinate System” card was created on May 5th 2014 but never progressed beyond Backlogged, i.e. it never got an estimated date. Worse than that, quite honestly, we have seldom heard anything at all about it from UE staff, despite the posts here in the forums asking about large-world issues or about handling float-inaccuracy issues.
So I decided to post this thread as a mix of a reminder and an appeal to the UE staff to give us any update on that front: is the most popular feature-request card on the roadmap still under consideration? Are there any real plans to implement it in any foreseeable future?
“Large world coordinate system” means “make the engine use double precision”. I think the biggest problem is that PhysX does not support double precision, so even if Epic made the whole engine use it, physics would still not be usable far from the origin. So Epic might just be waiting for Nvidia to create a double-precision version of PhysX.
As far as I’m aware, the change was focused around origin rebasing, which the engine had already developed as part of its large-world support. It just doesn’t work in multiplayer currently. To the best of my knowledge, the “large world coordinate system” referred to making that work in multiplayer somehow.
I may have missed the part about full double precision calculations if it was mentioned elsewhere.
I think a “large coordinate system” does not sound related to world origin rebasing, since with world origin rebasing you don’t need a bigger coordinate system; you just move the world around inside the small coordinate system.
For me, “large coordinate system” sounds like “we want to support a larger coordinate system”, with “large” meaning larger numbers inside the coordinate system. And larger numbers mean double precision support.
It might also just be a strange name; it could be that you’re right and they mean world origin rebasing. But just because I move stuff around inside a coordinate system, the coordinate system doesn’t get any larger.
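To make the distinction concrete, here is a minimal standalone sketch of the origin-rebasing idea being discussed; the types and function names (Vec3, RebaseOrigin) are made up for illustration and are not engine code. Positions stay as ordinary 32-bit floats; when the camera drifts too far out, everything is shifted back so coordinates stay small:

```cpp
#include <vector>

// Illustrative stand-in for a float vector type (not UE4's FVector).
struct Vec3 {
    float x, y, z;
};

// Origin rebasing sketch: shift every object (and the camera) so the
// camera sits back near (0,0,0). Precision is preserved because all
// coordinates remain small; the coordinate system itself never grows.
inline void RebaseOrigin(std::vector<Vec3>& objects, Vec3& camera) {
    const Vec3 shift = camera;
    for (Vec3& p : objects) {
        p.x -= shift.x;
        p.y -= shift.y;
        p.z -= shift.z;
    }
    camera.x = camera.y = camera.z = 0.0f;
}
```

So after a rebase, an object one unit ahead of the camera is simply at x = 1 again, no matter how far out the pair had traveled.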
I suspect it’s just the name throwing people off. “Large World Coordinate System” can get confused with a “Large Coordinate System” which could be an entirely separate thing.
In any case, the last time I heard about it was actually when the UE4 Kite Demo was done, with them adding origin rebasing for level streaming, since that more or less completely solves the problem. There haven’t really been any developments on the multiplayer side of things, though, which is the only feature that’s obviously completely lacking in the engine right now in regards to large-world support. It clearly works well enough for the Kite demo to run on such a massive world; it’s a shame it can’t be used in multiplayer.
They’re rebasing the origin just like Daniel says, they’re not increasing the precision. If you’re going to discuss semantics, a coordinate system can’t be larger or smaller; that’s why it’s called Large World Coordinate System.
Many thanks to all of you for the replies, and I apologize for being late in following up. But yes, I agree that the removal of UX improvements for large worlds is a bad sign. Although I too have hopes that after Paragon is released, some features like the large-world one could come back to the table. In any case, I think at some point the UE4 staff could gift us with some words on the matter, especially considering how old that Trello card is and how popular it has become.
Oh no, about that I am almost sure you are wrong, for a few reasons. UE4 staff already stated, even back then, that they have no plans of going the double-precision route. Which makes sense, considering the lack of double-precision support in middleware like PhysX and even in GPU hardware. There are a few non-double-precision solutions that could be baked into a built-in feature. The famous “origin rebasing” or “floating origin” is often regarded as the best solution. But people usually forget that implementing it in a performant way for professional, demanding games is not just a matter of manually moving all the world objects around the camera. It really has to be implemented with higher-level modifications at the engine level: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.471.7201&rep=rep1&type=pdf
Something like that would be enough to enable a really performant solution for large worlds, in both single- and multiplayer. And that’s an engine-level modification that, sure, one could achieve by altering the source code, but it is certainly not an easy enough task for many users to perform in an ideal manner. Having it as a built-in feature, as once considered on that Trello card, would certainly unleash the power of large worlds far more easily, to far more people.
Large Coordinate system was origin rebasing IIRC. They might be doing more work on it in regards to level streaming etc but we haven’t heard anything more.
As far as I remember, Epic said they won’t be doing double precision because it would require changes across the entire engine (which it would, and it’d be a nightmare). Anyone looking for that feature is best to let go of it straight away; it’s never gonna happen.
You can still do all of your movement code using double precision if you want to, then convert back to standard float vectors afterwards. This makes a significant difference in precision around the 20K-unit mark and above; it’s what we did for our game, which required objects at human scale in Earth orbit (1 cm = 1 km for us).
I received a PM asking how we did our stuff in double precision. It’s very easy, so I’ll repost it here.
Essentially the first thing I did (because we knew we’d need it) was create a new math library (FMathD) and a new vector class (DVector, though it should have been FVectorD or something) - essentially a carbon copy of the engine’s FMath and FVector classes, but with all the non-templated functions taking doubles instead of floats. Most of the std math functions called under the hood natively handle both doubles and floats, so this was very easy. The thing to remember is to REMOVE the ‘f’ suffix from the end of your numeric literals, to ensure the compiler knows to use doubles.
The only thing I added to the DVector was a ‘ToVector()’ function, which just returns an FVector made from the doubles (cast to floats). Although initially this seems counterintuitive, it’s not - the FVector is perfectly capable of storing precise values in the six-figure region; the precision loss comes from all of the conversions and operations you perform to get to that value. Do your movement code entirely in doubles, and convert to a regular FVector to use for Position / Velocity etc. at the end. This pretty much gets around all the major issues around the lack of double precision.
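A minimal standalone sketch of the DVector idea described above (Vec3/DVec3 are stand-ins for the real FVector/DVector, and StepPosition is an invented helper, not the poster’s actual code): do the arithmetic in doubles and only cast down to floats at the very end via ToVector().

```cpp
// Stand-in for a single-precision engine vector (not UE4's FVector).
struct Vec3 {
    float x, y, z;
};

// Double-precision vector: a carbon copy of the float type, but storing
// doubles, plus a ToVector() that casts back down at the end.
struct DVec3 {
    double x, y, z;

    DVec3 operator+(const DVec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    DVec3 operator*(double s) const { return {x * s, y * s, z * s}; }

    // Convert back to single precision once all the math is done.
    Vec3 ToVector() const {
        return {static_cast<float>(x),
                static_cast<float>(y),
                static_cast<float>(z)};
    }
};

// Integrate movement in doubles; hand a float vector back to the engine.
inline Vec3 StepPosition(const DVec3& pos, const DVec3& vel, double dt) {
    return (pos + vel * dt).ToVector();
}
```

The single float cast at the end introduces at most one rounding step, instead of accumulating error on every intermediate operation.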
Without doing this, our orbit system would never have worked - especially at high framerates, where the delta time is absolutely minute. Multiplying huge values like position (in km) by a value like 0.008f causes significant precision loss. Converting to doubles massively reduces that.
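A tiny demonstration of that loss (hypothetical numbers, not from the poster’s game): at a position around 20 million units, the spacing between adjacent floats is 2.0, so a per-frame increment of velocity × 0.008 simply vanishes in single precision, while it survives in doubles.

```cpp
// Single precision: at 2e7 the float ulp is 2.0f, so adding a tiny
// per-frame delta (vel * dt) rounds away to nothing.
inline float StepFloat(float pos, float vel, float dt) {
    return pos + vel * dt;
}

// Double precision: the same update is comfortably representable.
inline double StepDouble(double pos, double vel, double dt) {
    return pos + vel * dt;
}
```

In float, the object never moves; in double it advances every frame, which is exactly the failure mode an orbiting body at high framerate runs into.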
That card did not refer to ‘origin rebasing’ (which is already supported) but to actually changing the engine to use double precision for world positions. After doing some investigation, this just doesn’t seem like the right thing for us to work on at the moment. It would be a HUGE amount of work (basically every FVector and FTransform could potentially change), and would probably make upgrading rather painful for users.
Any plans to implement less invasive solutions, like local level coordinates (where each level has its own local coordinates) and world coordinates represented by, say, int64?
Basically something like Game Programming Gems 4, chapter 2.3.
I thought about doing it myself and then submitting a PR, but I don’t have any real idea where to start with it, aside from modifying the math library to account for the new world coordinates.
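A rough sketch of what that scheme could look like, under my own assumptions (the types, the ToWorld/ToLocal helpers, and the choice of whole units for the int64 axes are all illustrative, not from the book or the engine): each level keeps small, precise local float coordinates, while its placement in the world is an int64 offset.

```cpp
#include <cstdint>
#include <cmath>

// Small, precise per-level coordinates (fits comfortably in floats).
struct LocalPos {
    float x, y, z;
};

// Absolute world position in whole units, far beyond float range.
struct WorldPos {
    int64_t x, y, z;
};

// Combine a level's int64 origin with a local float offset
// (rounded to whole units for this sketch).
inline WorldPos ToWorld(const WorldPos& levelOrigin, const LocalPos& local) {
    return {levelOrigin.x + static_cast<int64_t>(std::llround(local.x)),
            levelOrigin.y + static_cast<int64_t>(std::llround(local.y)),
            levelOrigin.z + static_cast<int64_t>(std::llround(local.z))};
}

// Express a world position relative to a level origin; the difference is
// small, so the cast back to float loses nothing near the level.
inline LocalPos ToLocal(const WorldPos& levelOrigin, const WorldPos& world) {
    return {static_cast<float>(world.x - levelOrigin.x),
            static_cast<float>(world.y - levelOrigin.y),
            static_cast<float>(world.z - levelOrigin.z)};
}
```

The point is that floats only ever hold the small level-relative part, while the int64 origins carry the huge absolute offsets exactly.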
I think one problem is that the place where you notice precision issues most (physics) is handled by PhysX, and Nvidia does not have a double-precision version of PhysX as far as I know. So even if UE4 supported double precision for everything it does, that would only affect rendering, but nothing related to physics (traces, collision, overlaps etc.), since that would still run with single precision, unfortunately.
That was three years ago now, and with how ambitious games are getting in terms of scale it’s only a matter of time. I would be very surprised if PhysX didn’t either already support it in the current version, or wouldn’t add it in the next year. Since there were already plans for it back in 2014, I’m actually pretty curious if anyone would be able to find out if it turned into anything.
I honestly don’t think that adding double precision solves the problem. It just moves it further out in space. It’s still not enough precision to simulate, say, an entire solar system. I know that’s pretty “WTF, the entire solar system!?”, but still.
Of course, it would still help to have a higher precision margin before things start to fall apart.
But we still need a proper solution on top of it, like hierarchical coordinates or a spatially tiled world.
A tiled world will still be limited by the top-level coordinate type (like int64), but that is much, much bigger than what doubles will give you.
Doesn’t double precision mean that every calculation gets slower, due to moving 64 bits around instead of 32? Not to mention that double precision is not supported on many GPUs yet, which would mean those calculations would have to run on the CPU.